[{"question": "What does a factor analysis tell you", "positive_ctxs": [{"text": "Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. Factor analysis aims to find independent latent variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Interpreting factor analysis is based on using a \"heuristic\", which is a solution that is \"convenient even if not absolutely true\". More than one interpretation can be made of the same data factored the same way, and factor analysis cannot identify causality."}, {"text": "Higher-order factor analysis is a statistical method consisting of repeating steps factor analysis \u2013 oblique rotation \u2013 factor analysis of rotated factors. Its merit is to enable the researcher to see the hierarchical structure of studied phenomena. To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983) and perhaps applying a Varimax rotation to the result (Thompson, 1990) or by using a Schmid-Leiman solution (SLS, Schmid & Leiman, 1957, also known as Schmid-Leiman transformation) which attributes the variation from the primary factors to the second-order factors."}, {"text": "The initial development of common factor analysis with multiple factors was given by Louis Thurstone in two papers in the early 1930s, summarized in his 1935 book, The Vector of Mind. 
Thurstone introduced several important factor analysis concepts, including communality, uniqueness, and rotation. He advocated for \"simple structure\", and developed methods of rotation that could be used as a way to achieve such structure. In Q methodology, Stephenson, a student of Spearman, distinguished between R factor analysis, oriented toward the study of inter-individual differences, and Q factor analysis oriented toward subjective intra-individual differences. Raymond Cattell was a strong advocate of factor analysis and psychometrics and used Thurstone's multi-factor theory to explain intelligence."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}]}, {"question": "Does a qualitative study have variables", "positive_ctxs": [{"text": "In qualitative research no hypotheses or relationships of variables are tested. Because variables must be defined numerically in hypothesis-testing research, they cannot reflect subjective experience. This leads to hypothesis-generating research using the grounded theory method to study subjective experience directly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs."}, {"text": "In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. 
For these plants, some qualitative variables are available as, for example, the species to which the plant belongs."}, {"text": "In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs."}, {"text": "In PCA, it is common that we want to introduce qualitative variables as supplementary elements. For example, many quantitative variables have been measured on plants. For these plants, some qualitative variables are available as, for example, the species to which the plant belongs."}, {"text": "Quantitative regressors in regression models often have an interaction among each other. In the same way, qualitative regressors, or dummies, can also have interaction effects between each other, and these interactions can be depicted in the regression model. For example, in a regression involving determination of wages, if two qualitative variables are considered, namely, gender and marital status, there could be an interaction between marital status and gender."}, {"text": "Quantitative regressors in regression models often have an interaction among each other. In the same way, qualitative regressors, or dummies, can also have interaction effects between each other, and these interactions can be depicted in the regression model. For example, in a regression involving determination of wages, if two qualitative variables are considered, namely, gender and marital status, there could be an interaction between marital status and gender."}, {"text": "Categorical variables represent a qualitative method of scoring data (i.e. represents categories or group membership). 
These can be included as independent variables in a regression analysis or as dependent variables in logistic regression or probit regression, but must be converted to quantitative data in order to be able to analyze the data."}]}, {"question": "What is the difference between descriptive and inferential statistics quizlet", "positive_ctxs": [{"text": "Explain the difference between descriptive and inferential statistics. Descriptive statistics describes sets of data. Inferential statistics draws conclusions about the sets of data based on sampling. A population is a set of units of interest to a study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory, and are frequently non-parametric statistics."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "The partition of sums of squares is a concept that permeates much of inferential statistics and descriptive statistics. More properly, it is the partitioning of sums of squared deviations or errors. Mathematically, the sum of squared deviations is an unscaled, or unadjusted measure of dispersion (also called variability)."}]}, {"question": "Which method is best for smoothing of data", "positive_ctxs": [{"text": "Exponential Smoothing is one of the more popular smoothing techniques due to its flexibility, ease in calculation, and good performance. Exponential Smoothing uses a simple average calculation to assign exponentially decreasing weights starting with the most recent observations."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Newton's method requires the 2nd order derivatives, so for each iteration, the number of function calls is in the order of N\u00b2, but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. 
Which one is best with respect to the number of function calls depends on the problem itself."}, {"text": "For every exponential smoothing method we also need to choose the value for the smoothing parameters. For simple exponential smoothing, there is only one smoothing parameter (\u03b1), but for the methods that follow there is usually more than one smoothing parameter."}, {"text": "will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand."}, {"text": "Validations are not made on a \"more is better\" assumption (higher stated prevalence of the behavior of interest) when selecting the best method for reducing SDB as this is a \"weak validation\" that does not always guarantee the best results. Instead, ground \"truthed\" comparisons of observed data to stated data should reveal the most accurate method."}, {"text": "There are cases where the smoothing parameters may be chosen in a subjective manner \u2013 the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more robust and objective way to obtain values for the unknown parameters included in any exponential smoothing method is to estimate them from the observed data."}, {"text": "This way of regularizing naive Bayes is called Laplace smoothing when the pseudocount is one, and Lidstone smoothing in the general case."}, {"text": "This way of regularizing naive Bayes is called Laplace smoothing when the pseudocount is one, and Lidstone smoothing in the general case."}]}, {"question": "How do you handle multi label classification", "positive_ctxs": [{"text": "Basically, there are three methods to solve a multi-label classification problem, namely: Problem Transformation. Adapted Algorithm.1 Binary Relevance. This is the simplest technique, which basically treats each label as a separate single class classification problem. 2 Classifier Chains. 
3 Label Powerset."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Transformation into multi-class classification problem: The label powerset (LP) transformation creates one binary classifier for every label combination present in the training set. For example, if possible labels for an example were A, B, and C, the label powerset representation of this problem is a multi-class classification problem with the classes [0 0 0], [1 0 0], [0 1 0], [0 0 1], [1 1 0], [1 0 1], [0 1 1]. [1 1 1] where for example [1 0 1] denotes an example where labels A and C are present and label B is absent."}, {"text": "Transformation into multi-class classification problem: The label powerset (LP) transformation creates one binary classifier for every label combination present in the training set. For example, if possible labels for an example were A, B, and C, the label powerset representation of this problem is a multi-class classification problem with the classes [0 0 0], [1 0 0], [0 1 0], [0 0 1], [1 1 0], [1 0 1], [0 1 1]. [1 1 1] where for example [1 0 1] denotes an example where labels A and C are present and label B is absent."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What are the types of sampling errors", "positive_ctxs": [{"text": "Five Common Types of Sampling Errors: Population Specification Error\u2014This error occurs when the researcher does not understand who they should survey. Sample Frame Error\u2014A frame error occurs when the wrong sub-population is used to select a sample. More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A distinction, albeit not universally accepted, of sampling bias is that it undermines the external validity of a test (the ability of its results to be generalized to the entire population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? 
What are the extended dimensions of the pressure of the two parts?"}, {"text": "In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and population parameter is considered the sampling error."}, {"text": "The term \"Observational error\" is also sometimes used to refer to response errors and some other types of non-sampling error. In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996). These errors can be random or systematic."}, {"text": "The term \"Observational error\" is also sometimes used to refer to response errors and some other types of non-sampling error. In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996). These errors can be random or systematic."}, {"text": "The term \"Observational error\" is also sometimes used to refer to response errors and some other types of non-sampling error. In survey-type situations, these errors can be mistakes in the collection of data, including both the incorrect recording of a response and the correct recording of a respondent's inaccurate response. 
These sources of non-sampling error are discussed in Salant and Dillman (1994) and Bland and Altman (1996). These errors can be random or systematic."}, {"text": "Within any of the types of frames identified above, a variety of sampling methods can be employed, individually or in combination. Factors commonly influencing the choice between these designs include:"}]}, {"question": "Is Word2Vec deep learning", "positive_ctxs": [{"text": "The Word2Vec Model This model was created by Google in 2013 and is a predictive deep learning based model to compute and generate high quality, distributed and continuous dense vector representations of words, which capture contextual and semantic similarity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. 
The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}, {"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}, {"text": "Is the yield of good cookies affected by the baking temperature and time in the oven? The table shows data for 8 batches of cookies."}]}, {"question": "What is the difference between the law of large numbers and the law of averages", "positive_ctxs": [{"text": "The law of averages is not a mathematical principle, whereas the law of large numbers is. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of independent and identically distributed (i.i.d.)"}, {"text": "This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem. 
Since it links theoretically derived probabilities to their actual frequency of occurrence in the real world, the law of large numbers is considered as a pillar in the history of statistical theory and has had widespread influence. The law of large numbers (LLN) states that the sample average"}, {"text": "The presence of W in each summand of the objective function makes it difficult to apply the law of large numbers and the central limit theorem."}, {"text": "The presence of W in each summand of the objective function makes it difficult to apply the law of large numbers and the central limit theorem."}, {"text": "Benoit Mandelbrot distinguished between \"mild\" and \"wild\" risk and argued that risk assessment and management must be fundamentally different for the two types of risk. Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict."}, {"text": "Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails."}, {"text": "Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. 
Using the law of averages, one might predict that there will be 50 heads and 50 tails."}]}, {"question": "Why AI algorithms are biased", "positive_ctxs": [{"text": "Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"Marvin Minsky writes \"This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence? \"Nick Bostrom observes that \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.\""}, {"text": "Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies. Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias."}, {"text": "Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. 
Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies. Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias."}, {"text": "Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals to see how AI influences the economy, the laws and ethics that are involved with AI and how to minimize AI security risks. In the long-term, the scientists have proposed to continue optimizing function while minimizing possible security risks that come along with new technologies. Some are concerned about algorithmic bias, that AI programs may unintentionally become biased after processing data that exhibits bias."}, {"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering."}]}, {"question": "What does it mean to target someone", "positive_ctxs": [{"text": ": to aim an attack at someone or something. 
: to direct an action, message, etc., at someone or something."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Perceived risk differences occur depending on how far or close a compared target is to an individual making a risk estimate. The greater the perceived distance between the self and the comparison target, the greater the perceived difference in risk. When one brings the comparison target closer to the individual, risk estimates appear closer together than if the comparison target was someone more distant to the participant."}, {"text": "With regards to the optimistic bias, when people compare themselves to an average person, whether someone of the same sex or age, the target continues to be viewed as less human and less personified, which will result in less favorable comparisons between the self and others."}, {"text": "In the study they asked participants to choose between a stroke and asthma as to which one someone was more likely to die from. The researchers concluded that it depended on what experiences were available to them. 
If they knew someone or heard of someone that died from one of the diseases that is the one they perceived to be a higher risk to pass away from."}, {"text": "Moreover, withdrawing may be also employed when someone know that the other party is totally engaged with hostility and does not want (can not) to invest further unreasonable efforts."}, {"text": "Thus, ambiguous sentences will take a shorter time to read compared to disambiguated sentences.This is referred to as the underspecification account as readers do not commit to a meaning when not provided with disambiguating information. The reader understands someone scratched herself but does not seek to determine whether it was the maid or the princess. This is also known as the \u201cgood-enough\u201d approach to understanding language."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What are the 4 characteristics of a binomial distribution", "positive_ctxs": [{"text": "1: The number of observations n is fixed. 2: Each observation is independent. 3: Each observation represents one of two outcomes (\"success\" or \"failure\"). 4: The probability of \"success\" p is the same for each outcome."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}, {"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. 
If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}, {"text": "The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success."}, {"text": "The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes\u2013no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 \u2212 p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. 
The binomial distribution is the basis for the popular binomial test of statistical significance."}]}, {"question": "How do you determine the number of neurons in the input layer", "positive_ctxs": [{"text": "The number of neurons in the input layer equals the number of input variables in the data being processed. The number of neurons in the output layer equals the number of outputs associated with each input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. 
For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}, {"text": "The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color."}]}, {"question": "When X and Y are statistically independent then I xy is", "positive_ctxs": [{"text": "If two random variables X and Y are independent, then their covariance Cov(X, Y) = E(XY) \u2212 E(X)E(Y) = 0, that is, they are uncorrelated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. 
If Y tends to decrease when X increases, the Spearman correlation coefficient is negative."}, {"text": "A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfectly monotone functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1."}, {"text": "The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies."}, {"text": "If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of \"same number of elements\" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets."}, {"text": "When some object X is said to be embedded in another object Y, the embedding is given by some injective and structure-preserving map f : X \u2192 Y. The precise meaning of \"structure-preserving\" depends on the kind of mathematical structure of which X and Y are instances. In the terminology of category theory, a structure-preserving map is called a morphism."}, {"text": "When some object X is said to be embedded in another object Y, the embedding is given by some injective and structure-preserving map f : X \u2192 Y. The precise meaning of \"structure-preserving\" depends on the kind of mathematical structure of which X and Y are instances. 
In the terminology of category theory, a structure-preserving map is called a morphism."}, {"text": ", and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent divided by the observed frequency of incorrect predictions. In this example, the conviction value of 1.2 shows that the rule"}]}, {"question": "What is artificial intelligence machine learning and deep learning", "positive_ctxs": [{"text": "Artificial intelligence is imparting a cognitive ability to a machine. The idea behind machine learning is that the machine can learn without human intervention. The machine needs to find a way to learn how to solve a task given the data. Deep learning is the breakthrough in the field of artificial intelligence."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that \"There\u2019s nothing artificial about AI...It\u2019s inspired by people, it\u2019s created by people, and\u2014most importantly\u2014it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.\u201d"}, {"text": "Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that \"There\u2019s nothing artificial about AI...It\u2019s inspired by people, it\u2019s created by people, and\u2014most importantly\u2014it impacts people. 
It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.\u201d"}, {"text": "Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that \"There\u2019s nothing artificial about AI...It\u2019s inspired by people, it\u2019s created by people, and\u2014most importantly\u2014it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.\u201d"}, {"text": "Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that \"There\u2019s nothing artificial about AI...It\u2019s inspired by people, it\u2019s created by people, and\u2014most importantly\u2014it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility.\u201d"}]}, {"question": "Is it possible for unsupervised learning algorithms to outperform supervised ones", "positive_ctxs": [{"text": "Counterintuitive as it may be, supervised algorithms (particularly logistic regression and random forest) tend to outperform unsupervised ones on discrete classification and categorization tasks, where data is relatively structured and well-labeled."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "After discovering a process model and aligning the event log, it is possible to create basic supervised and unsupervised learning problems. 
For example, to predict the remaining processing time of a running case or to identify the root causes of compliance problems."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. 
Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}]}, {"question": "Where should I insert batch normalization", "positive_ctxs": [{"text": "You should put it after the non-linearity (eg. relu layer). If you are using dropout remember to use it before."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others sustain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks.After batch norm, many other in-layer normalization methods have been introduced, such as instance normalization, layer normalization, group normalization."}, {"text": "The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. 
Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift."}, {"text": "Besides analyzing this correlation experimentally, theoretical analysis is also provided for verification that batch normalization could result in a smoother landscape. Consider two identical networks, one contains batch normalization layers and the other doesn't, the behaviors of these two networks are then compared. Denote the loss functions as"}, {"text": "The correlation between batch normalization and internal covariate shift is widely accepted but was not supported by experimental results. Scholars recently show with experiments that the hypothesized relationship is not an accurate one. Rather, the enhanced accuracy with the batch normalization layer seems to be independent of internal covariate shift."}, {"text": "To understand if there is any correlation between reducing covariate shift and improving performance, an experiment is performed to elucidate the relationship. Specifically, three models are trained and compared: a standard VGG network without batch normalization, a VGG network with batch normalization layers, and a VGG network with batch normalization layers and random noise. In the third model, the noise has non-zero mean and non-unit variance, and is generated at random for each layer."}, {"text": "Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits. With this additional operation, the network can use higher learning rate without vanishing or exploding gradients. 
Furthermore, batch normalization seems to have a regularizing effect such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting."}, {"text": "Moreover, the batch normalized models are compared with models with different normalization techniques. Specifically, these normalization methods work by first fixing the first order moment of activation, and then normalizing it by the average of the"}]}, {"question": "What is an example of the normal approximation of the binomial distribution", "positive_ctxs": [{"text": "For example, if n = 100 and p = 0.25 then we are justified in using the normal approximation. This is because np = 25 and n(1 - p) = 75. Since both of these numbers are greater than 10, the appropriate normal distribution will do a fairly good job of estimating binomial probabilities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed \u2014 see law of rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. There is a rule of thumb stating that the Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n \u2265 100 and np \u2264 10."}, {"text": "The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. Hence the multivariate normal distribution is an example of the class of elliptical distributions."}, {"text": "In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem. 
The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution."}, {"text": "In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem. The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution."}, {"text": "In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem. The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution."}, {"text": "In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem. The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution."}, {"text": "In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem. 
The blue picture, made with CumFreq, illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution."}]}, {"question": "What are the benefits of hierarchical clustering over K means clustering", "positive_ctxs": [{"text": "Hierarchical clustering outputs a hierarchy, ie a structure that is more informa ve than the unstructured set of flat clusters returned by k-\u2010means. Therefore, it is easier to decide on the number of clusters by looking at the dendrogram (see sugges on on how to cut a dendrogram in lab8)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated (\"correlated\") subspace clusters that can be modeled by giving a correlation of their attributes. Examples for such clustering algorithms are CLIQUE and SUBCLU.Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC, hierarchical subspace clustering and DiSH) and correlation clustering (HiCO, hierarchical correlation clustering, 4C using \"correlation connectivity\" and ERiC exploring hierarchical density-based correlation clusters)."}, {"text": "For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. 
This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated (\"correlated\") subspace clusters that can be modeled by giving a correlation of their attributes. Examples for such clustering algorithms are CLIQUE and SUBCLU. Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC, hierarchical subspace clustering and DiSH) and correlation clustering (HiCO, hierarchical correlation clustering, 4C using \"correlation connectivity\" and ERiC exploring hierarchical density-based correlation clusters)."}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. 
Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}]}, {"question": "How does the Monty Hall problem work", "positive_ctxs": [{"text": "The monty hall problem has 3 doors instead of 100. It is still more likely that you pick a goat. If a person picks door 1 which is wrong the Monty Hall will close door 3 and give you chance to switch to the right answer, so it means they want always people win the prize."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Probability and the Monty Hall problem\", BBC News Magazine, 11 September 2013 (video). Mathematician Marcus du Sautoy explains the Monty Hall paradox."}, {"text": "The problem is actually an extrapolation from the game show. Monty Hall did open a wrong door to build excitement, but offered a known lesser prize \u2013 such as $100 cash \u2013 rather than a choice to switch doors. As Monty Hall wrote to Selvin:"}, {"text": "Steve Selvin posed the Monty Hall problem in a pair of letters to the American Statistician in 1975. The first letter presented the problem in a version close to its presentation in Parade 15 years later. The second appears to be the first use of the term \"Monty Hall problem\"."}, {"text": "The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let's Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from a reader's letter quoted in Marilyn vos Savant's \"Ask Marilyn\" column in Parade magazine in 1990:"}, {"text": "Mueser, Peter R. & Granberg, Donald (May 1999). 
\"The Monty Hall Dilemma Revisited: Understanding the Interaction of Problem Definition and Decision Making\"."}, {"text": "Herbranson, W. T. & Schroeder, J. \"Are birds smarter than mathematicians? Pigeons (Columba livia) perform optimally on a version of the Monty Hall Dilemma\"."}, {"text": "Vos Savant wrote in her first column on the Monty Hall problem that the player should switch. She received thousands of letters from her readers \u2013 the vast majority of which, including many from readers with PhDs, disagreed with her answer. During 1990\u20131991, three more of her columns in Parade were devoted to the paradox."}]}, {"question": "How do you do weightage to a variable", "positive_ctxs": [{"text": "To calculate how much weight you need, divide the known population percentage by the percent in the sample. For this example: Known population females (51) / Sample Females (41) = 51/41 = 1.24. Known population males (49) / Sample males (59) = 49/59 = ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Is age a continuous variable", "positive_ctxs": [{"text": "A variable is said to be continuous if it can assume an infinite number of real values. Examples of a continuous variable are distance, age and temperature. The measurement of a continuous variable is restricted by the methods used, or by the accuracy of the measuring instruments."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no \"gaps\", which would correspond to numbers which have a finite probability of occurring. 
Instead, continuous random variables almost never take an exact prescribed value c (formally,"}, {"text": "Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no \"gaps\", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally,"}, {"text": "is a step function, and a continuous random variable otherwise. This allows for continuous distributions that have a cumulative density function, but not a probability density function, such as the Cantor distribution."}, {"text": "is a step function, and a continuous random variable otherwise. This allows for continuous distributions that have a cumulative density function, but not a probability density function, such as the Cantor distribution."}, {"text": "is a step function, and a continuous random variable otherwise. This allows for continuous distributions that have a cumulative density function, but not a probability density function, such as the Cantor distribution."}, {"text": "is a step function, and a continuous random variable otherwise. This allows for continuous distributions that have a cumulative density function, but not a probability density function, such as the Cantor distribution."}, {"text": "Stratification: As in the example above, physical activity is thought to be a behaviour that protects from myocardial infarct; and age is assumed to be a possible confounder. The data sampled is then stratified by age group \u2013 this means that the association between activity and infarct would be analyzed per each age group. 
If the different age groups (or age strata) yield much different risk ratios, age must be viewed as a confounding variable."}]}, {"question": "Is median the same with second quartile", "positive_ctxs": [{"text": "The lower quartile, or first quartile, is denoted as Q1 and is the middle number that falls between the smallest value of the dataset and the median. The second quartile, Q2, is also the median."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.This rule is employed by the TI-83 calculator boxplot and \"1-Var Stats\" functions."}, {"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.This rule is employed by the TI-83 calculator boxplot and \"1-Var Stats\" functions."}, {"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.The values found by this method are also known as \"Tukey's hinges\"; see also midhinge."}, {"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.The values found by this method are also known as \"Tukey's hinges\"; see also midhinge."}, {"text": "The third quartile value is the number that marks three quarters of the ordered set. In other words, there are exactly 75% of the elements that are less than the first quartile and 25% of the elements that are greater. The third quartile value can be easily determined by finding the \"middle\" number between the median and the maximum."}, {"text": "So the first, second and third 4-quantiles (the \"quartiles\") of the dataset {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} are {7, 9, 15}. 
If also required, the zeroth quartile is 3 and the fourth quartile is 20."}, {"text": "So the first, second and third 4-quantiles (the \"quartiles\") of the dataset {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20."}]}, {"question": "What is residual analysis used for", "positive_ctxs": [{"text": "Residual analysis is used to assess the appropriateness of a linear regression model by defining residuals and examining the residual plot graphs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "(but not Mammen's), this method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes. Different forms are used for the random variable"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. 
Also, data structures and algorithms for general fast search."}, {"text": "The formula for the updated (a posteriori) estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any Kk is also shown."}]}, {"question": "How do you know if a sample size is representative", "positive_ctxs": [{"text": "If you want a representative sample of a particular population, you need to ensure that: the sample source includes all the target population, and the selected data collection method (online, phone, paper, in person) can reach individuals that represent that target population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students where 25% are girls. And by definition, the chance of this random student being a girl is 25%.
Every Bayes theorem problem can be solved in this way."}, {"text": "Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students where 25% are girls. And by definition, the chance of this random student being a girl is 25%. Every Bayes theorem problem can be solved in this way."}, {"text": "For example, if we want to measure current obesity levels in a population, we could draw a sample of 1,000 people randomly from that population (also known as a cross section of that population), measure their weight and height, and calculate what percentage of that sample is categorized as obese. This cross-sectional sample provides us with a snapshot of that population, at that one point in time. Note that we do not know based on one cross-sectional sample if obesity is increasing or decreasing; we can only describe the current proportion."}]}, {"question": "What is the difference between stratified random sampling and cluster sampling", "positive_ctxs": [{"text": "The main difference between stratified sampling and cluster sampling is that with cluster sampling, you have natural groups separating your population. With stratified random sampling, these breaks may not exist, so you divide your target population into groups (more formally called \"strata\")."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum.
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "Oversampling: Choice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "Oversampling: Choice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "A common motivation for cluster sampling is to reduce the total number of interviews and costs given the desired accuracy.
For a fixed sample size, the expected random error is smaller when most of the variation in the population is present internally within the groups, and not between the groups."}]}, {"question": "In what setting are z scores useful", "positive_ctxs": [{"text": "The standard score (more commonly referred to as a z-score) is a very useful statistic because it (a) allows us to calculate the probability of a score occurring within our normal distribution and (b) enables us to compare two scores that are from different normal distributions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is the Taylor series of the natural logarithm at z = 1. The Taylor series of ln(z) provides a particularly useful approximation to ln(1+z) when z is small, |z| < 1, since then"}, {"text": "Finding the best match for a test word z, involves placing z in the address register and finding the least distance d for which there is an occupied location. We can start the search by setting"}, {"text": "In recent decades, there has also been increasing interest in \"behavioral decision theory\", contributing to a re-evaluation of what useful decision-making requires."}, {"text": "No matter what a student scores on the original test, the best prediction of their score on the second test is 50."}, {"text": "If a regression of y is conducted upon x only, this last equation is what is estimated, and the regression coefficient on x is actually an estimate of (b + cf ), giving not simply an estimate of the desired direct effect of x upon y (which is b), but rather of its sum with the indirect effect (the effect f of x on z times the effect c of z on y). Thus by omitting the variable z from the regression, we have estimated the total derivative of y with respect to x rather than its partial derivative with respect to x. 
These differ if both c and f are non-zero."}, {"text": "For instance, in multivariable calculus, one often encounters functions of the form z = f(x,y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often referred to as vector-valued functions."}, {"text": "For instance, in multivariable calculus, one often encounters functions of the form z = f(x,y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often referred to as vector-valued functions."}]}, {"question": "Why is survival analysis used", "positive_ctxs": [{"text": "There are three primary goals of survival analysis: to estimate and interpret survival and/or hazard functions from the survival data; to compare survival and/or hazard functions; and to assess the relationship of explanatory variables to survival time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models.
Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "The survival function is a function that gives the probability that a patient, device, or other object of interest will survive beyond any specified time. The survival function is also known as the survivor function or reliability function. The term reliability function is common in engineering while the term survival function is used in a broader range of applications, including human mortality. Another name for the survival function is the complementary cumulative distribution function."}, {"text": "The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model."}, {"text": "The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model."}, {"text": "More generally, survival analysis involves the modelling of time to event data; in this context, death or failure is considered an \"event\" in the survival analysis literature \u2013 traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken. Recurring event or repeated event models relax that assumption.
The study of recurring events is relevant in systems reliability, and in many areas of social sciences and medical research."}, {"text": "More generally, survival analysis involves the modelling of time to event data; in this context, death or failure is considered an \"event\" in the survival analysis literature \u2013 traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken. Recurring event or repeated event models relax that assumption. The study of recurring events is relevant in systems reliability, and in many areas of social sciences and medical research."}]}, {"question": "Why does data need to be normally distributed in parametric tests", "positive_ctxs": [{"text": "Every parametric test has the assumption that the sample means follow a normal distribution. This is the case if the sample itself is normally distributed, or approximately so if the sample size is big enough."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes, is particularly important in evaluating the appropriateness of the ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes, is particularly important in evaluating the appropriateness of the ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "So when the result of a statistical analysis is said to be an \u201cexact test\u201d or an \u201cexact p-value\u201d, it ought to imply that the test is defined without parametric assumptions and evaluated without using approximate algorithms.
In principle however it could also mean that a parametric test has been employed in a situation where all parametric assumptions are fully met, but it is in most cases impossible to prove this completely in a real world situation. Exceptions when it is certain that parametric tests are exact include tests based on the binomial or Poisson distributions."}, {"text": "There are cases in which uncorrelatedness does imply independence. One of these cases is the one in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution). Further, two jointly normally distributed random variables are independent if they are uncorrelated, although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not joint normal (see Normally distributed and uncorrelated does not imply independent)."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). 
All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}]}, {"question": "What is Seq2Seq model", "positive_ctxs": [{"text": "A Seq2Seq model is a model that takes a sequence of items (words, letters, time series, etc) and outputs another sequence of items. The encoder captures the context of the input sequence in the form of a hidden state vector and sends it to the decoder, which then produces the output sequence."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
(#5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is an array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}]}, {"question": "How do you find the median in survival time", "positive_ctxs": [{"text": "Divide the number of subjects by 2, and round down. In the example 5 \u00f7 2 = 2.5 and rounding down gives 2. Find the first-ordered survival time that is greater than this number. This is the median survival time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In some cases, median survival cannot be determined from the graph.
For example, for survival function 4, more than 50% of the subjects survive longer than the observation period of 10 months."}, {"text": "Typically, by far the majority of the computational effort and time is spent on calculating the median of each window. Because the filter must process every entry in the signal, for large signals such as images, the efficiency of this median calculation is a critical factor in determining how fast the algorithm can run. The na\u00efve implementation described above sorts every entry in the window to find the median; however, since only the middle value in a list of numbers is required, selection algorithms can be much more efficient."}, {"text": "For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time a median can still be calculated.Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation."}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "How is XGBoost different from gradient boosting", "positive_ctxs": [{"text": "While regular gradient boosting uses the loss function of our base model (e.g. decision tree) as a proxy for minimizing the error of the overall model, XGBoost uses the 2nd order derivative as an approximation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Like other boosting methods, gradient boosting combines weak \"learners\" into a single strong learner in an iterative fashion. It is easiest to explain in the least-squares regression setting, where the goal is to \"teach\" a model"}, {"text": "Like other boosting methods, gradient boosting combines weak \"learners\" into a single strong learner in an iterative fashion. It is easiest to explain in the least-squares regression setting, where the goal is to \"teach\" a model"}, {"text": "Like other boosting methods, gradient boosting combines weak \"learners\" into a single strong learner in an iterative fashion. It is easiest to explain in the least-squares regression setting, where the goal is to \"teach\" a model"}, {"text": "increases the margin of the loss. 
It is shown that this is directly equivalent to decreasing the learning rate in gradient boosting"}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}]}, {"question": "Why is there a degree of freedom of n 1 for sample standard deviation", "positive_ctxs": [{"text": "The reason n-1 is used is because that is the number of degrees of freedom in the sample. The sum of each value in a sample minus the mean must equal 0, so if you know what all the values except one are, you can calculate the value of the final one."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The use of the term n \u2212 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. 
The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n \u2212 1.5 yields an almost unbiased estimator."}, {"text": "The use of the term n \u2212 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n \u2212 1.5 yields an almost unbiased estimator."}, {"text": "The use of the term n \u2212 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n \u2212 1.5 yields an almost unbiased estimator."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%. Gurland and Tripathi (1971) provide a correction and equation for this effect."}]}, {"question": "What are the advantages of statistics", "positive_ctxs": [{"text": "Statistical knowledge helps you use the proper methods to collect the data, employ the correct analyses, and effectively present the results. Statistics is a crucial process behind how we make discoveries in science, make decisions based on data, and make predictions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}]}, {"question": "What data is used for multiple linear regression", "positive_ctxs": [{"text": "Linear regression can only be used when one has two continuous variables\u2014an independent variable and a dependent variable. The independent variable is the parameter that is used to calculate the dependent variable or outcome. 
A multiple regression model extends to several explanatory variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A basic tool for econometrics is the multiple linear regression model. In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables."}, {"text": "The general linear model or general multivariate regression model is simply a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as"}, {"text": "The general linear model or general multivariate regression model is simply a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. 
The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What are the four elements of a descriptive statistics problem", "positive_ctxs": [{"text": "The four elements of a descriptive statistics problem include population/sample, tables/graphs, identifying patterns, and A. data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "When a sample consists of more than one variable, descriptive statistics may be used to describe the relationship between pairs of variables. In this case, descriptive statistics include:"}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory, and are frequently non-parametric statistics."}]}, {"question": "What is density and relative density", "positive_ctxs": [{"text": "The mass density (\u03c1) of a substance is the mass of one unit volume of the substance. The relative density is the ratio of the mass of the substance in air at 20 \u00b0C to that of an equal volume of water at the same temperature."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "has described how the magnetic field force of a current-bearing wire arises from this relative charge density. 
He used (p 260) a Minkowski diagram to show \"how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame.\" When a charge density is measured in a moving frame of reference it is called proper charge density. It turns out the charge density \u03c1 and current density J transform together as a four current vector under Lorentz transformations."}, {"text": "Minimum kurtosis takes place when the mass density is concentrated equally at each end (and therefore the mean is at the center), and there is no probability mass density in between the ends."}, {"text": "(The density is non-constant because of a non-constant angle between the sphere and the plane.) The density of X may be calculated by integration,"}, {"text": "If either \u03b1 or \u03b2 approaches infinity (and the other is finite) all the probability density is concentrated at an end, and the probability density is zero everywhere else. If both shape parameters are equal (the symmetric case), \u03b1 = \u03b2, and they approach infinity simultaneously, the probability density becomes a spike (Dirac delta function) concentrated at the middle x = 1/2, and hence there is 100% probability at the middle x = 1/2 and zero probability everywhere else."}, {"text": "A density of 100% (19/19) is the greatest density in the system. A density of 5% indicates there is only 1 of 19 possible connections."}]}, {"question": "What is the comparison mean for a paired sample t test", "positive_ctxs": [{"text": "The Paired Samples t Test compares two means that are from the same individual, object, or related units. 
The two means can represent things like: A measurement taken at two different times (e.g., pre-test and post-test with an intervention administered between the two time points)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used."}, {"text": "Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used."}, {"text": "The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used to compare two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e. it is a paired difference test). It can be used as an alternative to the paired Student's t-test (also known as \"t-test for matched pairs\" or \"t-test for dependent samples\") when the distribution of the difference between two samples' means cannot be assumed to be normally distributed."}, {"text": "The common example scenario for when a paired difference test is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect."}, {"text": "Dummy coding is used when there is a control or comparison group in mind. One is therefore analyzing the data of one group in relation to the comparison group: a represents the mean of the control group and b is the difference between the mean of the experimental group and the mean of the control group. 
It is suggested that three criteria be met for specifying a suitable control group: the group should be a well-established group (e.g."}, {"text": "Dummy coding is used when there is a control or comparison group in mind. One is therefore analyzing the data of one group in relation to the comparison group: a represents the mean of the control group and b is the difference between the mean of the experimental group and the mean of the control group. It is suggested that three criteria be met for specifying a suitable control group: the group should be a well-established group (e.g."}, {"text": "Dummy coding is used when there is a control or comparison group in mind. One is therefore analyzing the data of one group in relation to the comparison group: a represents the mean of the control group and b is the difference between the mean of the experimental group and the mean of the control group. It is suggested that three criteria be met for specifying a suitable control group: the group should be a well-established group (e.g."}]}, {"question": "What is a gradient in deep learning", "positive_ctxs": [{"text": "Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact suffers from gradient explosion at initialization time, no matter what it uses for nonlinearity. 
Thus the optimization landscape is very far from smooth for a randomly initialized, deep batchnorm network."}, {"text": "Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}]}, {"question": "Differences between prior distribution and prior predictive distribution", "positive_ctxs": [{"text": "The prior distribution is a distribution for the parameters whereas the prior predictive distribution is a distribution for the observations. 
The last line is based on the assumption that the upcoming observation is independent of X given \u03b8."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, and hence the prior and posterior distributions come from the same family, it can easily be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution."}, {"text": "Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions."}, {"text": "In Bayesian probability theory, if the posterior distributions p(\u03b8 | x) are in the same probability distribution family as the prior probability distribution p(\u03b8), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | \u03b8). For example, the Gaussian family is conjugate to itself (or self-conjugate) with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. 
This means that the Gaussian distribution is a conjugate prior for the likelihood that is also Gaussian."}, {"text": "In Bayesian inference, using a prior distribution Beta(\u03b1Prior,\u03b2Prior) prior to a binomial distribution is equivalent to adding (\u03b1Prior \u2212 1) pseudo-observations of \"success\" and (\u03b2Prior \u2212 1) pseudo-observations of \"failure\" to the actual number of successes and failures observed, then estimating the parameter p of the binomial distribution by the proportion of successes over both real- and pseudo-observations. A uniform prior Beta(1,1) does not add (or subtract) any pseudo-observations since for Beta(1,1) it follows that (\u03b1Prior \u2212 1) = 0 and (\u03b2Prior \u2212 1) = 0. The Haldane prior Beta(0,0) subtracts one pseudo observation from each and Jeffreys prior Beta(1/2,1/2) subtracts 1/2 pseudo-observation of success and an equal number of failure."}, {"text": "As part of the Bayesian framework, the Gaussian process specifies the prior distribution that describes the prior beliefs about the properties of the function being modeled. These beliefs are updated after taking into account observational data by means of a likelihood function that relates the prior beliefs to the observations. Taken together, the prior and likelihood lead to an updated distribution called the posterior distribution that is customarily used for predicting test cases."}, {"text": "Returning to our example, if we pick the Gamma distribution as our prior distribution over the rate of the poisson distributions, then the posterior predictive is the negative binomial distribution as can be seen from the last column in the table below. 
The Gamma distribution is parameterized by two hyperparameters"}, {"text": "the Conditional Normalized Maximum Likelihood (CNML) predictive distribution, from information theoretic considerations. The accuracy of a predictive distribution may be measured using the distance or divergence between the true exponential distribution with rate parameter, \u03bb0, and the predictive distribution based on the sample x. The Kullback\u2013Leibler divergence is a commonly used, parameterisation free measure of the difference between two distributions. Letting \u0394(\u03bb0||p) denote the Kullback\u2013Leibler divergence between an exponential with rate parameter \u03bb0 and a predictive distribution p it can be shown that"}]}, {"question": "Is a decision tree a model", "positive_ctxs": [{"text": "Decision trees: Are popular among non-statisticians as they produce a model that is very easy to interpret. Each leaf node is presented as an if/then rule."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In computer science, a logistic model tree (LMT) is a classification model with an associated supervised training algorithm that combines logistic regression (LR) and decision tree learning. Logistic model trees are based on the earlier idea of a model tree: a decision tree that has linear regression models at its leaves to provide a piecewise linear regression model (where ordinary decision trees with constants at their leaves would produce a piecewise constant model). In the logistic variant, the LogitBoost algorithm is used to produce an LR model at every node in the tree; the node is then split using the C4.5 criterion. Each LogitBoost invocation is warm-started from its results in the parent node."}, {"text": "A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). 
A decision stump makes a prediction based on the value of just a single input feature."}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. 
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "What is competitive learning algorithm in neural network", "positive_ctxs": [{"text": "Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data. Models and algorithms based on the principle of competitive learning include vector quantization and self-organizing maps (Kohonen maps)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weights and bias levels of a network when a network is simulated in a specific data environment."}, {"text": "Competitive learning is a form of unsupervised learning in artificial neural networks, in which nodes compete for the right to respond to a subset of the input data. A variant of Hebbian learning, competitive learning works by increasing the specialization of each node in the network. 
It is well suited to finding clusters within data."}, {"text": "AlphaGo and its successors use a Monte Carlo tree search algorithm to find its moves based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) by extensive training, both from human and computer play. A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration."}, {"text": "One solution is to use an (adapted) artificial neural network as a function approximator. Function approximation may speed up learning in finite problems, due to the fact that the algorithm can generalize earlier experiences to previously unseen states."}, {"text": "A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, and is therefore a method to do dimensionality reduction. Self-organizing maps differ from other artificial neural networks as they apply competitive learning as opposed to error-correction learning (such as backpropagation with gradient descent), and in the sense that they use a neighborhood function to preserve the topological properties of the input space."}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. 
The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}]}, {"question": "Does Restricted Boltzmann Machine expect the data to be labeled for training", "positive_ctxs": [{"text": "Answer. True is the answer of Restricted Boltzmann Machine expect data to be labeled for Training as because there are two process for training one which is called as pre-training and training. In pre-training one don't need labeled data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The visible units of Restricted Boltzmann Machine can be multinomial, although the hidden units are Bernoulli. In this case, the logistic function for visible units is replaced by the softmax function"}, {"text": "\"A Beginner's Guide to Restricted Boltzmann Machines\". Archived from the original on February 11, 2017. 
Retrieved November 15, 2018."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}]}, {"question": "What are the importance of sampling in statistics", "positive_ctxs": [{"text": "Sampling is a statistical procedure that is concerned with the selection of the individual observation; it helps us to make statistical inferences about the population. In sampling, we assume that samples are drawn from the population and sample means and population means are equal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hence, the basic methodology in importance sampling is to choose a distribution which \"encourages\" the important values. This use of \"biased\" distributions will result in a biased estimator if it is applied directly in the simulation. 
However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased."}, {"text": "Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average"}, {"text": "possibly infinite memory (adaptive equalizers)In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially breaking down a simulation into several smaller, more sharply defined subproblems. Then importance sampling strategies are used to target each of the simpler subproblems."}, {"text": "therefore, a good probability change P(L) in importance sampling will redistribute the law of X so that its samples' frequencies are sorted directly according to their weights in E[X;P]. Hence the name \"importance sampling.\""}, {"text": "In statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. It is related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both."}, {"text": "The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the \"art\" of importance sampling. 
The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}]}, {"question": "How is AI used in engineering", "positive_ctxs": [{"text": "AI programs can provide automation for low-value tasks freeing up engineers to perform higher-value tasks. By using machine learning to discover patterns in the data, machines will be incredibly important to help with engineering judgment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining."}, {"text": "Pairwise comparison generally is any process of comparing entities in pairs to judge which of each entity is preferred, or has a greater amount of some quantitative property, or whether or not the two entities are identical. The method of pairwise comparison is used in the scientific study of preferences, attitudes, voting systems, social choice, public choice, requirements engineering and multiagent AI systems. In psychology literature, it is often referred to as paired comparison."}, {"text": "The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R. 
In addition, CV is utilized by economists and investors in economic models."}, {"text": "Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is the engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems."}, {"text": "In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a \"solved problem\" for most production tasks."}, {"text": "Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering and agriculture. For example, in social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients."}]}, {"question": "Is bootstrapping the same as bagging", "positive_ctxs": [{"text": "Bootstrap aggregating, also called bagging (from bootstrap aggregating), is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bootstrap aggregating, often abbreviated as bagging, involves having each model in the ensemble vote with equal weight. 
In order to promote model variance, bagging trains each model in the ensemble using a randomly drawn subset of the training set. As an example, the random forest algorithm combines random decision trees with bagging to achieve very high classification accuracy.In bagging the samples are generated in such a way that the samples are different from each other however replacement is allowed."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Bootstrap Aggregating was proposed by Leo Breiman who also coined the abbreviated term \"Bagging\" (Bootstrap aggregating). Breiman developed the concept of bagging in 1994 to improve classification by combining classifications of randomly generated training sets. He argued, \u201cIf perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy.\u201d"}]}, {"question": "What is NLP used for", "positive_ctxs": [{"text": "Natural language processing helps computers communicate with humans in their own language and scales other language-related tasks. For example, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment and determine which parts are important."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "When used metaphorically (\u201dTomorrow is a big day\u201d), the author\u2019s intent to imply \u201dimportance\u201d. The intent behind other usages, like in \u201dShe is a big person\u201d will remain somewhat ambiguous to a person and a cognitive NLP algorithm alike without additional information."}]}, {"question": "What is regression and classification", "positive_ctxs": [{"text": "Fundamentally, classification is about predicting a label and regression is about predicting a quantity. That classification is the problem of predicting a discrete class label output for an example. That regression is the problem of predicting a continuous quantity output for an example."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data."}, {"text": "The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression) [1], multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x and a weighting vector w is:"}, {"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "What are the limitations of Mohr's method", "positive_ctxs": [{"text": "A) (ii) Disadvantages of Mohr Method \uf0a7 Mohr's method is suitable only for titration of chloride, bromide and cyanide alone. \uf0a7 Errors can be introduced due to the need of excess titrant before the endpoint colour is visible."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Lasso variants have been created in order to remedy limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or exploiting dependencies among the covariates."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule."}, {"text": "There are a number of limitations and criticisms of evidence-based medicine. 
Two widely cited categorization schemes for the various published critiques of EBM include the three-fold division of Straus and McAlister (\"limitations universal to the practice of medicine, limitations unique to evidence-based medicine and misperceptions of evidence-based-medicine\") and the five-point categorization of Cohen, Stavri and Hersh (EBM is a poor philosophic basis for medicine, defines evidence too narrowly, is not evidence-based, is limited in usefulness when applied to individual patients, or reduces the autonomy of the doctor/patient relationship).In no particular order, some published objections include:"}, {"text": "Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella?"}, {"text": "Although first-order logic is sufficient for formalizing much of mathematics, and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe."}]}, {"question": "What are the data mining tools", "positive_ctxs": [{"text": "This article lists out 10 comprehensive data mining tools widely used in the big data industry.Rapid Miner. Oracle Data Mining. IBM SPSS Modeler. KNIME. Python. Orange. Kaggle. Rattle.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Not all patterns found by data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained."}, {"text": "Many FCA software applications are available today. 
The main purpose of these tools varies from formal context creation to formal concept mining and generating the concepts lattice of a given formal context and the corresponding implications and association rules. Most of these tools are academic open-source applications, such as:"}, {"text": "In an experiment, the variable manipulated by an experimenter is something that is proven to work called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}]}, {"question": "How do you find the degrees of freedom for a t test", "positive_ctxs": [{"text": "We can compute the p-value corresponding to the absolute value of the t-test statistics (|t|) for the degrees of freedom (df): df=n\u22121. 
If the p-value is inferior or equal to 0.05, we can conclude that the difference between the two paired samples are significantly different."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Experimental designs with undisclosed degrees of freedom are a problem. This can lead to conscious or unconscious \"p-hacking\": trying multiple things until you get the desired result. It typically involves the manipulation \u2013 perhaps unconsciously \u2013 of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance."}, {"text": "degrees of freedom is the sampling distribution of the t-value when the samples consist of independent identically distributed observations from a normally distributed population. Thus for inference purposes t is a useful \"pivotal quantity\" in the case when the mean and variance"}, {"text": "In many scientific fields, the degrees of freedom of a system is the number of parameters of the system that may vary independently. For example, a point in the plane has two degrees of freedom for translation: its two coordinates; a non-infinitesimal object on the plane might have additional degrees of freedoms related to its orientation."}, {"text": "When computing a t-test, it is important to keep in mind the degrees of freedom, which will depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups."}, {"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. 
For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}]}, {"question": "What does batch normalization do", "positive_ctxs": [{"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling. Others sustain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. 
Others sustain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks.After batch norm, many other in-layer normalization methods have been introduced, such as instance normalization, layer normalization, group normalization."}, {"text": "The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift."}, {"text": "Besides analyzing this correlation experimentally, theoretical analysis is also provided for verification that batch normalization could result in a smoother landscape. Consider two identical networks, one contains batch normalization layers and the other doesn't, the behaviors of these two networks are then compared. Denote the loss functions as"}, {"text": "The correlation between batch normalization and internal covariate shift is widely accepted but was not supported by experimental results. Scholars recently show with experiments that the hypothesized relationship is not an accurate one. Rather, the enhanced accuracy with the batch normalization layer seems to be independent of internal covariate shift."}, {"text": "To understand if there is any correlation between reducing covariate shift and improving performance, an experiment is performed to elucidate the relationship. Specifically, three models are trained and compared: a standard VGG network without batch normalization, a VGG network with batch normalization layers, and a VGG network with batch normalization layers and random noise. 
In the third model, the noise has non-zero mean and non-unit variance, and is generated at random for each layer."}, {"text": "Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits. With this additional operation, the network can use higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization seems to have a regularizing effect such that the network improves its generalization properties, and it is thus unnecessary to use dropout to mitigate overfitting."}, {"text": "Moreover, the batch normalized models are compared with models with different normalization techniques. Specifically, these normalization methods work by first fixing the first order moment of activation, and then normalizing it by the average of the"}]}, {"question": "What does it mean robust in statistics", "positive_ctxs": [{"text": "Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "is in no sense more or less likely than other points, as info-gap does not use probability. Info-gap, by not using probability distributions, is robust in that it is not sensitive to assumptions on probabilities of outcomes. 
However, the model of uncertainty does include a notion of \"closer\" and \"more distant\" outcomes, and thus includes some assumptions, and is not as robust as simply considering all possible outcomes, as in minimax."}]}, {"question": "How many hash functions are required in a minhash algorithm", "positive_ctxs": [{"text": "So, for 10% error, you need 100 hash functions. For 1% error, you need 10,000 hash functions. Yick. That's friggin expensive, and if that's all there were to MinHash, I'd simply go with the O(n log(n)) algorithm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this example, there is a 50% probability that the hash collision cancels out. Multiple hash functions can be used to further reduce the risk of collisions.Furthermore, if \u03c6 is the transformation implemented by a hashing trick with a sign hash \u03be (i.e. \u03c6(x) is the feature vector produced for a sample x), then inner products in the hashed space are unbiased:"}, {"text": "It has been suggested that a second, single-bit output hash function \u03be be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes"}, {"text": "An entry in a hash table is created predicting the model location, orientation, and scale from the match hypothesis. The hash table is searched to identify all clusters of at least 3 entries in a bin, and the bins are sorted into decreasing order of size."}, {"text": "The Content-Defined Chunking (CDC) algorithm needs to compute the hash value of a data stream byte by byte and split the data stream into chunks when the hash value meets a predefined value. However, comparing a string byte-by-byte will introduce the heavy computation overhead. FastCDC proposes a new and efficient Content-Defined Chunking approach."}, {"text": "There are many other areas of application for sequence learning. 
How humans learn sequential procedures has been a long-standing research problem in cognitive science and currently is a major topic in neuroscience. Research work has been going on in several disciplines, including artificial intelligence, neural networks, and engineering."}, {"text": "Information retrieval benefits particularly from dimensionality reduction in that search can become more efficient in certain kinds of low dimensional spaces. Autoencoders were indeed applied to semantic hashing, proposed by Salakhutdinov and Hinton in 2007. By training the algorithm to produce a low-dimensional binary code, all database entries could be stored in a hash table mapping binary code vectors to entries."}, {"text": "A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. Hash collisions are typically dealt via freed-up memory to increase the number of hash buckets."}]}, {"question": "Are genetic algorithms machine learning", "positive_ctxs": [{"text": "Genetic algorithms are important in machine learning for three reasons. First, they act on discrete spaces, where gradient-based methods cannot be used. They can be used to search rule sets, neural network architectures, cellular automata computers, and so forth."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. 
Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}]}, {"question": "What is an object detection model", "positive_ctxs": [{"text": "Given an image or a video stream, an object detection model can identify which of a known set of objects might be present and provide information about their positions within the image."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Part-based models refers to a broad class of detection algorithms used on images, in which various parts of the image are used separately in order to determine if and where an object of interest exists. 
Amongst these methods a very popular one is the constellation model which refers to those schemes which seek to detect a small number of features and their relative positions to then determine whether or not the object of interest is present."}, {"text": "In this journal, authors proposed a new approach to use SIFT descriptors for multiple object detection purposes. The proposed multiple object detection approach is tested on aerial and satellite images.SIFT features can essentially be applied to any task that requires identification of matching locations between images. Work has been done on applications such as recognition of particular object categories in 2D images, 3D reconstruction,"}, {"text": "The actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Multiple trace theory is a memory consolidation model advanced as an alternative model to strength theory. It posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes. Further support for this theory came in the 1960s from empirical findings that people could remember specific attributes about an object without remembering the object itself."}, {"text": "The inverse is \"If an object is not red, then it does not have color.\" An object which is blue is not red, and still has color. 
Therefore, in this case the inverse is false."}]}, {"question": "What are the applications of reinforcement learning", "positive_ctxs": [{"text": "Here are applications of Reinforcement Learning:Robotics for industrial automation.Business strategy planning.Machine learning and data processing.It helps you to create training systems that provide custom instruction and materials according to the requirement of students.Aircraft control and robot motion control."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning where a neural network is used to represent policies or value functions. As in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single layered neural network, it is sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon."}, {"text": "Up until the 2000s nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term \u2018learning classifier system\u2019 was commonly defined as the combination of \u2018trial-and-error\u2019 reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning have since broadened the use and definition of this term."}, {"text": "Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. 
Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:"}, {"text": "Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:"}, {"text": "The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system acts upon in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. This is in contrast to traditional learning techniques which rely on supervised learning approaches that are less flexible, reinforcement learning recommendation techniques allow to potentially train models that can be optimized directly on metrics of engagement, and user interest."}, {"text": "Many applications of reinforcement learning do not involve just a single agent, but rather a collection of agents that learn together and co-adapt. These agents may be competitive, as in many games, or cooperative as in many real-world multi-agent systems. Multi-agent learning studies the problems introduced in this setting."}, {"text": "Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. 
In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment."}]}, {"question": "What is meant by stratified sampling", "positive_ctxs": [{"text": "Definition: Stratified sampling is a type of sampling method in which the total population is divided into smaller groups or strata to complete the sampling process. Stratified sampling is used when the researcher wants to understand the existing relationship between two groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "OversamplingChoice-based sampling is one of the stratified sampling strategies. 
In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "OversamplingChoice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}]}, {"question": "What is backtracking algorithm", "positive_ctxs": [{"text": "Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate (\"backtracks\") as soon as it determines that the candidate cannot possibly be completed to a"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Eight queens problem is usually solved with a backtracking algorithm. However, a Las Vegas algorithm can be applied; in fact, it is more efficient than backtracking."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. 
In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "Which is an example of adaptive instruction", "positive_ctxs": [{"text": "Micro-level adaptive instruction: The main feature of this approach is to utilize on-task rather than pre-task measurement to diagnose the students' learning behaviors and performance so as to adapt the instruction at the micro-level. 
Typical examples include one-on-one tutoring and intelligent tutoring systems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with an N-stage pipeline can have up to N different instructions at different stages of completion and thus can issue one instruction per clock cycle (IPC = 1). These processors are known as scalar processors. The canonical example of a pipelined processor is a RISC processor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB)."}, {"text": "Which treatment is considered better is determined by an inequality between two ratios (successes/total). The reversal of the inequality between the ratios, which creates Simpson's paradox, happens because two effects occur together:"}, {"text": "It assumes the instruction vector_sum works. Although this is what happens with instruction intrinsics, much information is actually not taken into account here such as the number of vector components and their data format. This is done for clarity."}, {"text": "A vector processor is a CPU or computer system that can execute the same instruction on large sets of data. Vector processors have high-level operations that work on linear arrays of numbers or vectors. An example vector operation is A = B \u00d7 C, where A, B, and C are each 64-element vectors of 64-bit floating-point numbers."}, {"text": "A promising line in document summarization is adaptive document/text summarization. The idea of adaptive summarization involves preliminary recognition of document/text genre and subsequent application of summarization algorithms optimized for this genre. First summarizes that perform adaptive summarization have been created."}, {"text": "will generally be small but not necessarily zero. 
Which of these regimes is more relevant depends on the specific data set at hand."}, {"text": "In adaptive fuzzy fitness granulation, an adaptive pool of solutions, represented by fuzzy granules, with an exactly computed fitness function result is maintained. If a new individual is sufficiently similar to an existing known fuzzy granule, then that granule's fitness is used instead as an estimate. Otherwise, that individual is added to the pool as a new fuzzy granule."}]}, {"question": "What is distributional information", "positive_ctxs": [{"text": "The distributional hypothesis in linguistics is derived from the semantic theory of language usage, i.e. words that are used and occur in the same contexts tend to purport similar meanings. The underlying idea that \"a word is characterized by the company it keeps\" was popularized by Firth in the 1950s."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Distributional semantics favor the use of linear algebra as computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity. Different kinds of similarities can be extracted depending on which type of distributional information is used to collect the vectors: topical similarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in; paradigmatic similarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with."}, {"text": "Copula Variational Bayes inference via information geometry (pdf) by Tran, V.H. 
This paper is primarily written for students. Via Bregman divergence, the paper shows that Variational Bayes is simply a generalized Pythagorean projection of the true model onto an arbitrarily correlated (copula) distributional space, of which the independent space is merely a special case."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "In linguistics, word embeddings were discussed in the research area of distributional semantics. It aims to quantify and categorize semantic similarities between linguistic items based on their distributional properties in large samples of language data. The underlying idea that \"a word is characterized by the company it keeps\" was popularized by Firth. The notion of a semantic space with lexical items (words or multi-word terms) represented as vectors or embeddings is based on the computational challenges of capturing distributional characteristics and using them for practical application to measure similarity between words, phrases, or entire documents."}]}, {"question": "How do you calculate false positives and negatives", "positive_ctxs": [{"text": "The false positive rate is calculated as FP/(FP+TN), where FP is the number of false positives and TN is the number of true negatives (FP+TN being the total number of negatives). 
It's the probability that a false alarm will be raised: that a positive result will be given when the true value is negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. 
Now it needs to change itself (19 October 2013)"}, {"text": "Thus, if a test's sensitivity is 98% and its specificity is 92%, its rate of false negatives is 2% and its rate of false positives is 8%."}, {"text": "Thus, if a test's sensitivity is 98% and its specificity is 92%, its rate of false negatives is 2% and its rate of false positives is 8%."}, {"text": "Thus, if a test's sensitivity is 98% and its specificity is 92%, its rate of false negatives is 2% and its rate of false positives is 8%."}]}, {"question": "What is the datatype of the output for the function input ()", "positive_ctxs": [{"text": "The input() function accepts an optional string argument called prompt and returns a string. Note that the input() function always returns a string even if you entered a number. To convert it to an integer you can use int() or eval() functions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.The spatial size of the output volume is a 
function of the input volume size"}, {"text": "In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume. The spatial size of the output volume is a function of the input volume size"}]}, {"question": "What are some methods of time series regression analysis", "positive_ctxs": [{"text": "Time series regression is a statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors. Time series regression is commonly used for modeling and forecasting of economic, financial, and biological systems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. 
Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process."}]}, {"question": "What is Bayesian Hyperparameter optimization", "positive_ctxs": [{"text": "Bayesian hyperparameter tuning allows us to do so by building a probabilistic model for the objective function we are trying to minimize/maximize in order to train our machine learning model. Examples of such objective functions are not scary - accuracy, root mean squared error and so on."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. 
It is usually employed to optimize expensive-to-evaluate functions."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. 
By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "What is modality in machine learning", "positive_ctxs": [{"text": "Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multimodal deep Boltzmann machines are successfully used in classification and missing data retrieval. The classification accuracy of multimodal deep Boltzmann machine outperforms support vector machines, latent Dirichlet allocation and deep belief network, when models are tested on data with both image-text modalities or with a single modality. 
Multimodal deep Boltzmann machine is also able to predict the missing modality given the observed ones with reasonably good precision."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The multimodal learning model is also capable of filling a missing modality given the observed ones. The multimodal learning model combines two deep Boltzmann machines, each corresponding to one modality. An additional hidden layer is placed on top of the two Boltzmann Machines to give the joint representation."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How kernel functions are called", "positive_ctxs": [{"text": "An operating system (OS) is a set of functions or programs that coordinate a user program's access to the computer's resources (i.e. memory and CPU). These functions are called the MicroStamp11's kernel functions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. 
The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "The regularization and kernel theory literature for vector-valued functions followed in the 2000s. While the Bayesian and regularization perspectives were developed independently, they are in fact closely related."}]}, {"question": "Can you use Anova if data is not normally distributed", "positive_ctxs": [{"text": "As regards the normality of group data, the one-way ANOVA can tolerate data that is non-normal (skewed or kurtotic distributions) with only a small effect on the Type I error rate. However, platykurtosis can have a profound effect when your group sizes are small."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. This is a consequence of the central limit theorem."}, {"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. This is a consequence of the central limit theorem."}, {"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. This is a consequence of the central limit theorem."}, {"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. This is a consequence of the central limit theorem."}, {"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. 
This is a consequence of the central limit theorem."}, {"text": "For example, if Z is a normally distributed random variable, then P(Z=x) is 0 for any x, but P(Z\u2208R) = 1."}, {"text": ", which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to"}]}, {"question": "What is sample and sample size", "positive_ctxs": [{"text": "Sample size refers to the number of participants or observations included in a study. This number is usually represented by n. The size of a sample influences two statistical properties: 1) the precision of our estimates and 2) the power of the study to draw conclusions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Reporting sample size analysis is generally required in psychology. \"Provide information on sample size and the process that led to sample size decisions.\" The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards."}, {"text": "Reporting sample size analysis is generally required in psychology. \"Provide information on sample size and the process that led to sample size decisions.\" The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards."}, {"text": "Reporting sample size analysis is generally required in psychology. \"Provide information on sample size and the process that led to sample size decisions.\" The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards."}, {"text": "Reporting sample size analysis is generally required in psychology. 
\"Provide information on sample size and the process that led to sample size decisions.\" The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards."}, {"text": "Difference between Z-test and t-test: Z-test is used when sample size is large (n>50), or the population variance is known. t-test is used when sample size is small (n<50) and population variance is unknown."}, {"text": "The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. Therefore, it is less and less applicable with the sample involved in modern inference instances. The fault is not in the sample size on its own part."}, {"text": "The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases."}]}, {"question": "How does Anova work in statistics", "positive_ctxs": [{"text": "Analysis of variance (ANOVA) is a statistical technique that is used to check if the means of two or more groups are significantly different from each other. ANOVA checks the impact of one or more factors by comparing the means of different samples. Another measure to compare the samples is called a t-test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "What is model capacity in machine learning", "positive_ctxs": [{"text": "\u2022 Model capacity is ability to fit variety of functions. \u2013 Model with Low capacity struggles to fit training set. \u2013 A High capacity model can overfit by memorizing. properties of training set not useful on test set. \u2022 When model has higher capacity, it overfits."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. GPT-3's full version has a capacity of 175 billion machine learning parameters."}, {"text": "At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continuously learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise, and has high capacity (it can learn multiple patterns simultaneously)."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. 
Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}]}, {"question": "What is marginal and conditional distribution", "positive_ctxs": [{"text": "A marginal distribution is the percentages out of totals, and conditional distribution is the percentages out of some column. 
Conditional distribution, on the other hand, is the probability distribution of certain values in the table expressed as percentages out of sums (or local totals) of certain rows or columns."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable."}, {"text": "The marginal probability is the probability of a single event occurring, independent of other events. A conditional probability, on the other hand, is the probability that an event occurs given that another specific event has already occurred. This means that the calculation for one variable is dependent on another variable.The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable."}, {"text": "The marginal probability is the probability of a single event occurring, independent of other events. A conditional probability, on the other hand, is the probability that an event occurs given that another specific event has already occurred. This means that the calculation for one variable is dependent on another variable.The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable."}, {"text": "The marginal probability is the probability of a single event occurring, independent of other events. A conditional probability, on the other hand, is the probability that an event occurs given that another specific event has already occurred. 
This means that the calculation for one variable is dependent on another variable.The conditional distribution of a variable given another variable is the joint distribution of both variables divided by the marginal distribution of the other variable."}, {"text": "propose a p-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate. This p-value is inferentially consistent with classical tests of normally distributed data as well as with likelihood ratios and support intervals based on this conditional likelihood function. It is also readily computable."}, {"text": "propose a p-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate. This p-value is inferentially consistent with classical tests of normally distributed data as well as with likelihood ratios and support intervals based on this conditional likelihood function. It is also readily computable."}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}]}, {"question": "What is the cutoff for loading factors using factor analysis", "positive_ctxs": [{"text": "communalities is calculated sum of square factor loadings. Generally, an item factor loading is recommended higher than 0.30 or 0.33 cut value. 
So if an item load only one factor its communality will be 0.30*0.30 = 0.09."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data."}, {"text": "Factor loadings: Communality is the square of the standardized outer loading of an item. Analogous to Pearson's r-squared, the squared factor loading is the percent of variance in that indicator variable explained by the factor. To get the percent of variance in all the variables accounted for by each factor, add the sum of the squared factor loadings for that factor (column) and divide by the number of variables."}, {"text": "Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors. CFA uses structural equation modeling to test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables. Structural equation modeling approaches can accommodate measurement error, and are less restrictive than least-squares estimation."}, {"text": "Higher-order factor analysis is a statistical method consisting of repeating steps factor analysis \u2013 oblique rotation \u2013 factor analysis of rotated factors. Its merit is to enable the researcher to see the hierarchical structure of studied phenomena. 
To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983) and perhaps applying a Varimax rotation to the result (Thompson, 1990) or by using a Schmid-Leiman solution (SLS, Schmid & Leiman, 1957, also known as Schmid-Leiman transformation) which attributes the variation from the primary factors to the second-order factors."}, {"text": ", the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an orthogonal transformation."}, {"text": "In the Q factor analysis technique the matrix is transposed and factors are created by grouping related people. For example, liberals, libertarians, conservatives, and socialists might form into separate groups."}, {"text": "Interpreting factor analysis is based on using a \"heuristic\", which is a solution that is \"convenient even if not absolutely true\". More than one interpretation can be made of the same data factored the same way, and factor analysis cannot identify causality."}]}, {"question": "How is image processing used in machine learning", "positive_ctxs": [{"text": "5. Image Processing Using Machine LearningFeature mapping using the scale-invariant feature transform (SIFT) algorithm.Image registration using the random sample consensus (RANSAC) algorithm.Image Classification using artificial neural networks.Image classification using convolutional neural networks (CNNs)Image Classification using machine learning.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Markov random fields find application in a variety of fields, ranging from computer graphics to computer vision, machine learning or computational biology. MRFs are used in image processing to generate textures as they can be used to generate flexible and stochastic image models. 
In image modelling, the task is to find a suitable intensity distribution of a given image, where suitability depends on the kind of task and MRFs are flexible enough to be used for image and texture synthesis, image compression and restoration, image segmentation, 3D image inference from 2D images, image registration, texture synthesis, super-resolution, stereo matching and information retrieval."}, {"text": "The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by \"re-mixing\" the samples from NIST's original datasets."}, {"text": "The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing."}, {"text": "The goals vary from noise removal to feature abstraction. Filtering image data is a standard process used in almost all image processing systems. Nonlinear filters are the most utilized forms of filter construction."}, {"text": "The Kuwahara filter is a non-linear smoothing filter used in image processing for adaptive noise reduction. Most filters that are used for image smoothing are linear low-pass filters that effectively reduce noise but also blur out the edges. 
However the Kuwahara filter is able to apply smoothing on the image while preserving the edges."}, {"text": "The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs."}, {"text": "Whether a message is pipelined is an engineering tradeoff. How would an external observer know whether the processing of a message by an actor has been pipelined? There is no ambiguity in the definition of an actor created by the possibility of pipelining."}]}, {"question": "How do you find the mode of a chi square distribution", "positive_ctxs": [{"text": "As the df increase, the chi square distribution approaches a normal distribution. The mean of a chi square distribution is its df. The mode is df - 2 and the median is approximately df - 0 ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. 
In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}]}, {"question": "Which algorithm falls under unsupervised learning", "positive_ctxs": [{"text": "Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Depending on the type and variation in training data, machine learning can be roughly categorized into three frameworks: supervised learning, unsupervised learning, and reinforcement learning. Multiple instance learning (MIL) falls under the supervised learning framework, where every training instance has a label, either discrete or real valued. 
MIL deals with problems with incomplete knowledge of labels in training sets."}, {"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}, {"text": "Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy."}, {"text": "Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). 
Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy."}]}, {"question": "Is matrix factorization collaborative filtering", "positive_ctxs": [{"text": "Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In practice, many commercial recommender systems are based on large datasets. As a result, the user-item matrix used for collaborative filtering could be extremely large and sparse, which brings about the challenges in the performances of the recommendation."}, {"text": "Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends."}, {"text": "Several collaborative filtering algorithms have been developed to promote diversity and the \"long tail\" by recommending novel, unexpected, and serendipitous items."}, {"text": "In the more general sense, collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc. 
Applications of collaborative filtering typically involve very large data sets. Collaborative filtering methods have been applied to many different kinds of data including: sensing and monitoring data, such as in mineral exploration, environmental sensing over large areas or multiple sensors; financial data, such as financial service institutions that integrate many financial sources; or in electronic commerce and web applications where the focus is on user data, etc."}, {"text": "Collaborative filtering (CF) is a technique used by recommender systems. Collaborative filtering has two senses, a narrow one and a more general one.In the newer, narrower sense, collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue than that of a randomly chosen person."}, {"text": "The motivation for collaborative filtering comes from the idea that people often get the best recommendations from someone with tastes similar to themselves. Collaborative filtering encompasses techniques for matching people with similar interests and making recommendations on this basis."}, {"text": "User-item matrix is a basic foundation of traditional collaborative filtering techniques, and it suffers from data sparsity problem (i.e. As a consequence, except for user-item matrix, researchers are trying to gather more auxiliary information to help boost recommendation performance and develop personalized recommender systems. 
Generally, there are two popular auxiliary information: attribute information and interaction information."}]}, {"question": "Why is the binomial theorem useful", "positive_ctxs": [{"text": "The theorem and its generalizations can be used to prove results and solve problems in combinatorics, algebra, calculus, and many other areas of mathematics. The binomial theorem also helps explore probability in an organized way: A friend says that she will flip a coin 5 times."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When working in more dimensions, it is often useful to deal with products of binomial expressions. By the binomial theorem this is equal to"}, {"text": "The binomial theorem is valid more generally for two elements x and y in a ring, or even a semiring, provided that xy = yx. For example, it holds for two n \u00d7 n matrices, provided that those matrices commute; this is useful in computing powers of a matrix.The binomial theorem can be stated by saying that the polynomial sequence {1, x, x2, x3, ...} is of binomial type."}, {"text": "The binomial theorem is closely related to the probability mass function of the negative binomial distribution. The probability of a (countable) collection of independent Bernoulli trials"}, {"text": "Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. There is evidence that the binomial theorem for cubes was known by the 6th century AD in India.Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chanda\u1e25\u015b\u0101stra by the Indian lyricist Pingala (c. 200 BC), which contains a method for its solution."}, {"text": "The nature of \u03c6 can be seen from an example. 
Let d = 2, so we get the special case of the quadratic kernel. After using the multinomial theorem (twice\u2014the outermost application is the binomial theorem) and regrouping,"}, {"text": "Here, the superscript (n) indicates the nth derivative of a function. If one sets f(x) = eax and g(x) = ebx, and then cancels the common factor of e(a + b)x from both sides of the result, the ordinary binomial theorem is recovered."}, {"text": "Around 1665, Isaac Newton generalized the binomial theorem to allow real exponents other than nonnegative integers. (The same generalization also applies to complex exponents.) In this generalization, the finite sum is replaced by an infinite series."}]}, {"question": "Is linear regression A least squares", "positive_ctxs": [{"text": "Linear least squares regression is by far the most widely used modeling method. It is what most people mean when they say they have used \"regression\", \"linear regression\" or \"least squares\" to fit a model to their data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "is still assumed, with a matrix B replacing the vector \u03b2 of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. \"General linear models\" are also called \"multivariate linear models\"."}, {"text": "is still assumed, with a matrix B replacing the vector \u03b2 of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. \"General linear models\" are also called \"multivariate linear models\"."}]}, {"question": "How do you interpret lambda in SPSS", "positive_ctxs": [{"text": "0:1110:28\u0627\u0644\u0645\u0642\u0637\u0639 \u0627\u0644\u0645\u0642\u062a\u0631\u062d \u00b7 110 \u062b\u0627\u0646\u064a\u0629Lambda Measure of Association for Two Nominal Variables in SPSS YouTube\u0628\u062f\u0627\u064a\u0629 \u0627\u0644\u0645\u0642\u0637\u0639 \u0627\u0644\u0645\u0642\u062a\u0631\u064e\u062d\u0646\u0647\u0627\u064a\u0629 \u0627\u0644\u0645\u0642\u0637\u0639 \u0627\u0644\u0645\u0642\u062a\u0631\u064e\u062d"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Asymmetric lambda measures the percentage improvement in predicting the dependent variable. Symmetric lambda measures the percentage improvement when prediction is done in both directions."}, {"text": "Asymmetric lambda measures the percentage improvement in predicting the dependent variable. 
Symmetric lambda measures the percentage improvement when prediction is done in both directions."}, {"text": "Asymmetric lambda measures the percentage improvement in predicting the dependent variable. Symmetric lambda measures the percentage improvement when prediction is done in both directions."}, {"text": "SPSS Statistics is a software package used for interactive, or batched, statistical analysis. Long produced by SPSS Inc., it was acquired by IBM in 2009. Current versions (post 2015) have the brand name: IBM SPSS Statistics."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of \"sociology's most influential books\" for allowing ordinary researchers to do their own statistical analysis."}]}, {"question": "Why are statistics random variables", "positive_ctxs": [{"text": "A random variable can be either discrete (having specific values) or continuous (any value in a continuous range). The use of random variables is most common in probability and statistics, where they are used to quantify outcomes of random occurrences."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. 
Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}]}, {"question": "Why do we use KNN algorithm", "positive_ctxs": [{"text": "KNN algorithm is one of the simplest classification algorithm and it is one of the most used learning algorithms. 
KNN is a non-parametric, lazy learning algorithm. Its purpose is to use a database in which the data points are separated into several classes to predict the classification of a new sample point."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness."}, {"text": "\"Marvin Minsky writes \"This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence? 
\"Nick Bostrom observes that \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.\""}, {"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}]}, {"question": "What is a null hypothesis example", "positive_ctxs": [{"text": "A null hypothesis is a type of hypothesis used in statistics that proposes that there is no difference between certain characteristics of a population (or data-generating process). For example, a gambler may be interested in whether a game of chance is fair."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The null hypothesis is that the mean value of X is a given number \u03bc0. We can use X as a test-statistic, rejecting the null hypothesis if X \u2212 \u03bc0 is large."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "In the example above, the confidence interval only tells us that there is roughly a 50% chance that the p-value is smaller than 0.05, i.e. 
it is completely unclear whether the null hypothesis should be rejected at a level"}]}, {"question": "Is frequently referred to as K means clustering", "positive_ctxs": [{"text": "Non-hierarchical clustering is frequently referred to as k-means clustering. This type of clustering does not require all possible distances to be computed in a large data set. This technique is primarily used for the analysis of clusters in data mining."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "In this analysis, you need to use the adjusted means and adjusted MSerror. The adjusted means (also referred to as least squares means, LS means, estimated marginal means, or EMM) refer to the group means after controlling for the influence of the CV on the DV."}, {"text": "In this analysis, you need to use the adjusted means and adjusted MSerror. The adjusted means (also referred to as least squares means, LS means, estimated marginal means, or EMM) refer to the group means after controlling for the influence of the CV on the DV."}, {"text": "The distinction in computer programs between programs and literal data applies to all formal descriptions and is sometimes referred to as \"two parts\" of a description. In statistical MDL learning, such a description is frequently called a two-part code."}, {"text": "3, which has a goat. He then says to you, \"Do you want to pick door No. 
Is it to your advantage to switch your choice?"}, {"text": "Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 \u00b0F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to \u2212272.15 \u00b0C, or the temperature difference equal to 1 \u00b0C."}]}, {"question": "What are the 2 types of AI", "positive_ctxs": [{"text": "Artificial intelligence is generally divided into two types \u2013 narrow (or weak) AI and general AI, also known as AGI or strong AI."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010)."}, {"text": "Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010)."}, {"text": "There are 5 pink marbles, 2 blue marbles, and 8 purple marbles. What are the odds in favor of picking a blue marble?Answer: The odds in favour of a blue marble are 2:13. One can equivalently say, that the odds are 13:2 against."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Naturally, different AI architectures have their various pros and cons. One of the benefits of utility AI is that it is less \"hand-authored\" than many other types of game AI architectures. 
While behaviors in a utility system are often created individually (and by hand), the interactions and priorities between them are not inherently specified."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "When computing a t-test, it is important to keep in mind the degrees of freedom, which will depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups."}]}, {"question": "What is the purpose of Lasso regression", "positive_ctxs": [{"text": "The goal of lasso regression is to obtain the subset of predictors that minimizes prediction error for a quantitative response variable. The lasso does this by imposing a constraint on the model parameters that causes regression coefficients for some variables to shrink toward zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In high-dimension, when number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. The Lasso estimator of the regression parameter \u03b2 is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint."}, {"text": "In high-dimension, when number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. 
The Lasso estimator of the regression parameter \u03b2 is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint."}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "This regularization function, while attractive for the sparsity that it guarantees, is very difficult to solve because doing so requires optimization of a function that is not even weakly convex. Lasso regression is the minimal possible relaxation of"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. 
Both lasso and ridge regression can be interpreted as minimizing the same objective function"}]}, {"question": "When should we use hierarchical linear models", "positive_ctxs": [{"text": "In a nutshell, hierarchical linear modeling is used when you have nested data; hierarchical regression is used to add or remove variables from your model in multiple steps. Knowing the difference between these two seemingly similar terms can help you determine the most appropriate analysis for your study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There are several common parametric empirical Bayes models, including the Poisson\u2013gamma model (below), the Beta-binomial model, the Gaussian\u2013Gaussian model, the Dirichlet-multinomial model, as well specific models for Bayesian linear regression (see below) and Bayesian multivariate linear regression. More advanced approaches include hierarchical Bayes models and Bayesian mixture models."}, {"text": "Multilevel models (also known as hierarchical linear models, linear mixed-effect model, mixed models, nested data models, random coefficient, random-effects models, random parameter models, or split-plot designs) are statistical models of parameters that vary at more than one level. An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models."}, {"text": "is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). Efficiency should be understood as if we were to find some other estimator"}, {"text": "is efficient in the class of linear unbiased estimators. This is called the best linear unbiased estimator (BLUE). 
Efficiency should be understood as if we were to find some other estimator"}, {"text": "When practitioners need to consider multiple models, they can specify a probability-measure on the models and then select any design maximizing the expected value of such an experiment. Such probability-based optimal-designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution).The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however."}, {"text": "Hierarchical RNNs connect their neurons in various ways to decompose hierarchical behavior into useful subprograms . Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models ."}, {"text": "Hierarchical RNNs connect their neurons in various ways to decompose hierarchical behavior into useful subprograms . Such hierarchical structures of cognition are present in theories of memory presented by philosopher Henri Bergson, whose philosophical views have inspired hierarchical models ."}]}, {"question": "How do Conditional Random Fields CRF compare to Maximum Entropy Models and Hidden Markov Models", "positive_ctxs": [{"text": "The chief difference between MEMM and CRF is that MEMM is locally renormalized and suffers from the label bias problem, while CRFs are globally renormalized."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Klinger, R., Tomanek, K.: Classical Probabilistic Models and Conditional Random Fields. Algorithm Engineering Report TR07-2-013, Department of Computer Science, Dortmund University of Technology, December 2007."}, {"text": "Sutton, C., McCallum, A.: An Introduction to Conditional Random Fields for Relational Learning. In \"Introduction to Statistical Relational Learning\". 
Edited by Lise Getoor and Ben Taskar."}, {"text": "Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as in what task or activity the person is performing. Two kinds of Hierarchical Markov Models are the Hierarchical hidden Markov model and the Abstract Hidden Markov Model."}, {"text": "An example could be the activity of preparing a stir fry, which can be broken down into the subactivities or actions of cutting vegetables, frying the vegetables in a pan and serving it on a plate. Examples of such a hierarchical model are Layered Hidden Markov Models (LHMMs) and the hierarchical hidden Markov model (HHMM), which have been shown to significantly outperform its non-hierarchical counterpart in activity recognition."}, {"text": "The Hidden Markov Models were described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics."}, {"text": "This convinced many in the field that part-of-speech tagging could usefully be separated from the other levels of processing; this, in turn, simplified the theory and practice of computerized language analysis and encouraged researchers to find ways to separate other pieces as well. Markov Models are now the standard method for the part-of-speech assignment."}, {"text": "; Parzych G.; Pylak M.; Satu\u0142a D.; Dobrzy\u0144ski L. (2010). 
\"Application of Bayesian reasoning and the Maximum Entropy Method to some reconstruction problems\"."}]}, {"question": "Which is the best model used in Word2Vec algorithm for word embedding", "positive_ctxs": [{"text": "Two different learning models were introduced that can be used as part of the word2vec approach to learn the word embedding; they are: Continuous Bag-of-Words, or CBOW model. Continuous Skip-Gram Model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A \"bucket of models\" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set."}, {"text": "A \"bucket of models\" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set."}, {"text": "A \"bucket of models\" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. 
When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set."}, {"text": "An algorithm may learn an internal model of the data, which can be used to map points unavailable at training time into the embedding in a process often called out-of-sample extension."}, {"text": "Newton's method requires the 2nd order derivatives, so for each iteration, the number of function calls is in the order of N\u00b2, but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself."}, {"text": "t-distributed stochastic neighbor embedding (t-SNE) is widely used. It is one of a family of stochastic neighbor embedding methods. The algorithm computes the probability that pairs of datapoints in the high-dimensional space are related, and then chooses low-dimensional embeddings which produce a similar distribution."}, {"text": "An extension of word vectors for creating a dense vector representation of unstructured radiology reports has been proposed by Banerjee et al. One of the biggest challenges with Word2Vec is how to handle unknown or out-of-vocabulary (OOV) words and morphologically similar words. This can particularly be an issue in domains like medicine where synonyms and related words can be used depending on the preferred style of radiologist, and words may have been used infrequently in a large corpus."}]}, {"question": "Is artificial intelligence intelligent", "positive_ctxs": [{"text": "Artificial intelligence (AI) is the attempt to let computers perform services for which humans need intelligence. However, this is still not possible today. 
AI systems are capable of recognizing patterns, learning and making decisions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Artificial intelligence (or AI) is both the intelligence that is demonstrated by machines and the branch of computer science which aims to create it, through \"the study and design of intelligent agents\" or \"rational agents\", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. Kaplan and Haenlein define artificial intelligence as \u201ca system\u2019s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation\u201d. 
Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars."}, {"text": "As intelligent agents become more popular, there are increasing legal risks involved.Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations."}]}, {"question": "What is random effect in mixed model", "positive_ctxs": [{"text": "A random effect model is a model all of whose factors represent random effects. (See Random Effects.) Such models are also called variance component models. Random effect models are often hierarchical models. A model that contains both fixed and random effects is called a mixed model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences."}, {"text": "Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. The resulting \"subject-specific\" parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are also referred to as multilevel models and as mixed model."}, {"text": "Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. 
The resulting \"subject-specific\" parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are also referred to as multilevel models and as mixed model."}, {"text": "As with standard logit, the exploded logit model assumes no correlation in unobserved factors over alternatives. The exploded logit can be generalized, in the same way as the standard logit is generalized, to accommodate correlations among alternatives and random taste variation. The \"mixed exploded logit\" model is obtained by probability of the ranking, given above, for Lni in the mixed logit model (model I)."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. 
If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "One of the benefits of using growth function such as generalized logistic function in epidemiological modeling is its relatively easy expansion to the multilevel model framework by using the growth function to describe infection trajectories from multiple subjects (countries, cities, states, etc). Such a modeling framework can be also widely called the nonlinear mixed effect model or hierarchical nonlinear model. An example of using the generalized logistic function in Bayesian multilevel model is the Bayesian hierarchical Richards model."}]}, {"question": "What are uses of discrete distributions in real life", "positive_ctxs": [{"text": "Introduction Statistical discrete processes \u2013 for example, the number of accidents per driver, the number of insects per leaf in an orchard, the number of thunderstorms per year, the number of earthquakes per year, the number of patients visit emergency room in a certain hospital per day - often occur in real life."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Engaged: real life tasks are reflected in the activities conducted for learning.Active learning requires appropriate learning environments through the implementation of correct strategy. Characteristics of learning environment are:"}, {"text": "The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses"}, {"text": "The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses"}, {"text": "are usually regarded as continuous charge distributions, even though all real charge distributions are made up of discrete charged particles. 
Due to the conservation of electric charge, the charge density in any volume can only change if an electric current of charge flows into or out of the volume. This is expressed by a continuity equation which links the rate of change of charge density"}, {"text": "is purely discrete or mixed, implemented in C++ and in the KSgeneral package of the R language. The functions disc_ks_test(), mixed_ks_test() and cont_ks_test() also compute the KS test statistic and p-values for purely discrete, mixed or continuous null distributions and arbitrary sample sizes. The KS test and its p-values for discrete null distributions and small sample sizes are also computed as part of the dgof package of the R language."}, {"text": "In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function, whether applied to real numbers or complex numbers. The modular discrete logarithm is another variant; it has uses in public-key cryptography."}, {"text": "Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."}]}, {"question": "What products use artificial intelligence", "positive_ctxs": [{"text": "Artificial Intelligence Examples: Manufacturing robots. Smart assistants. Proactive healthcare management. Disease mapping. Automated financial investing. Virtual travel booking agent. Social media monitoring. Inter-team chat tool."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Air Operations Division (AOD) uses AI for the rule based expert systems. 
The AOD has use for artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post processing of the simulator data into symbolic summaries. The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights."}, {"text": "The Air Operations Division (AOD) uses AI for the rule based expert systems. The AOD has use for artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post processing of the simulator data into symbolic summaries. The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. 
DAI is categorized into Multi-agent systems and distributed problem solving [1]."}]}, {"question": "When Kruskal Wallis test is used", "positive_ctxs": [{"text": "The Kruskal-Wallis H test (sometimes also called the \"one-way ANOVA on ranks\") is a rank-based nonparametric test that can be used to determine if there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Kruskal\u2013Wallis test by ranks, Kruskal\u2013Wallis H test (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks is a non-parametric method for testing whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. It extends the Mann\u2013Whitney U test, which is used for comparing only two groups."}, {"text": "The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell\u2013Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept."}, {"text": "The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell\u2013Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. 
It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Box's M test is a multivariate statistical test used to check the equality of multiple variance-covariance matrices. The test is commonly used to test the assumption of homogeneity of variances and covariances in MANOVA and linear discriminant analysis. It is named after George E. P. Box who first discussed the test in 1949."}, {"text": "In contrast to permutation tests, the distributions underlying many popular \"classical\" statistical tests, such as the t-test, F-test, z-test, and \u03c72 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are very large, the Pearson's chi-square test will give accurate results."}]}, {"question": "What is r squared change in regression", "positive_ctxs": [{"text": "R-squared is a goodness-of-fit measure for linear regression models. This statistic indicates the percentage of the variance in the dependent variable that the independent variables explain collectively. 
After fitting a linear regression model, you need to determine how well the model fits the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What SIT, connectionism, and DST have in common is that they describe nonlinear system behavior, that is, a minor change in the input may yield a major change in the output. Their complementarity expresses itself in that they focus on different aspects:"}, {"text": "Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available."}, {"text": "Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available."}, {"text": "Closely related is the subject of least absolute deviations, a method of regression that is more robust to outliers than is least squares, in which the sum of the absolute value of the observed errors is used in place of the squared error. 
The connection is that the mean is the single estimate of a distribution that minimizes expected squared error while the median minimizes expected absolute error. Least absolute deviations shares the ability to be relatively insensitive to large deviations in outlying observations, although even better methods of robust regression are available."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}]}, {"question": "What do you mean by Perceptron and its learning rule", "positive_ctxs": [{"text": "Perceptron Learning Rule states that the algorithm would automatically learn the optimal weight coefficients. 
The input features are then multiplied with these weights to determine if a neuron fires or not."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Now, assume (for example) that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?"}, {"text": "A learning rule may accept existing conditions (weights and biases) of the network and will compare the expected result and actual result of the network to give new and improved values for weights and bias. Depending on the complexity of actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations."}, {"text": "A critical concept in LCS and rule-based machine learning alike, is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Think of a rule as a \"local-model\" of the solution space."}, {"text": "Therefore, seeing this advertisement could lead people astray to start smoking because of its induced appeal. In a study by Slovic et al. 
(2005), he released a survey to smokers in which he asked \u201cIf you had it to do all over again, would you start smoking?\u201d and more than 85% of adult smokers and about 80% of young smokers (between the ages of 14-22) answered \u201cNo.\u201d He found that most smokers, especially those that start at a younger age, do not take the time and think about how their future selves will perceive the risks associated with smoking."}, {"text": "Knowing the rule: this is a difficult condition to meet, because even the best students do not learn every rule that is taught, cannot remember every rule they have learned, and can't always correctly apply the rules they do remember. Furthermore, not every rule of a language is always included in a text or taught by the teacher."}]}, {"question": "What is the difference between t test and Mann Whitney test", "positive_ctxs": [{"text": "Unlike the independent-samples t-test, the Mann-Whitney U test allows you to draw different conclusions about your data depending on the assumptions you make about your data's distribution. These different conclusions hinge on the shape of the distributions of your data, which we explain more about later."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. 
Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}]}, {"question": "Can two events be independent and disjoint", "positive_ctxs": [{"text": "Two disjoint events can never be independent, except in the case that one of the events is null. Events are considered disjoint if they never occur at the same time. For example, being a freshman and being a sophomore would be considered disjoint events. Independent events are unrelated events."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. 
The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "The concepts of mutually independent events and mutually exclusive events are separate and distinct. The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero)."}, {"text": "For example, {1, 2, 3} and {4, 5, 6} are disjoint sets, while {1, 2, 3} and {3, 4, 5} are not disjoint. A collection of more than two sets is called disjoint if any two distinct sets of the collection are disjoint."}, {"text": "In logic and probability theory, two events (or propositions) are mutually exclusive or disjoint if they cannot both occur at the same time. 
A clear example is the set of outcomes of a single coin toss, which can result in either heads or tails, but not both."}, {"text": "If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events."}]}, {"question": "What is the difference between interpolation and extrapolation", "positive_ctxs": [{"text": "Interpolation refers to using the data in order to predict data within the dataset. Extrapolation is the use of the data set to predict beyond the data set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is often done by using a related series known for all relevant dates. Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression).The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "This is often done by using a related series known for all relevant dates. Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression).The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "This is often done by using a related series known for all relevant dates. 
Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression).The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}]}, {"question": "Which is the best algorithm for checking string similarity metric", "positive_ctxs": [{"text": "The most popular is definitely KMP, if you need fast string matching without any particular usecase in mind it's what you should use. 
Here are your options (with time complexity): Brute Force - O(nm); Knuth\u2013Morris\u2013Pratt algorithm - O(n)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A method based on proximity matrices is one where the data is presented to the algorithm in the form of a similarity matrix or a distance matrix. These methods all fall under the broader class of metric multidimensional scaling. The variations tend to be differences in how the proximity data is computed; for example, Isomap, locally linear embeddings, maximum variance unfolding, and Sammon mapping (which is not in fact a mapping) are examples of metric multidimensional scaling methods."}, {"text": "The term \"cosine similarity\" is sometimes used to refer to a different definition of similarity provided below. However the most common use of \"cosine similarity\" is as defined above and the similarity and distance metrics defined below are referred to as \"angular similarity\" and \"angular distance\" respectively. The normalized angle between the vectors is a formal distance metric and can be calculated from the similarity score defined above."}, {"text": "The term \"cosine similarity\" is sometimes used to refer to a different definition of similarity provided below. However the most common use of \"cosine similarity\" is as defined above and the similarity and distance metrics defined below are referred to as \"angular similarity\" and \"angular distance\" respectively. The normalized angle between the vectors is a formal distance metric and can be calculated from the similarity score defined above."}, {"text": "Newton's method requires the 2nd order derivatives, so for each iteration, the number of function calls is in the order of N\u00b2, but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. 
Which one is best with respect to the number of function calls depends on the problem itself."}, {"text": "When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications."}, {"text": "When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications."}, {"text": "Kolmogorov randomness defines a string (usually of bits) as being random if and only if any computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that \"program\" means a program for this universal machine. A random string in this sense is \"incompressible\" in that it is impossible to \"compress\" the string into a program that is shorter than the string itself."}]}, {"question": "What is a recommended way to visualize categorical data", "positive_ctxs": [{"text": "To visualize a small data set containing multiple categorical (or qualitative) variables, you can create either a bar plot, a balloon plot or a mosaic plot. These methods make it possible to analyze and visualize the association (i.e. 
correlation) between a large number of qualitative variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In statistics, multiple correspondence analysis (MCA) is a data analysis technique for nominal categorical data, used to detect and represent underlying structures in a data set. It does this by representing data as points in a low-dimensional Euclidean space. The procedure thus appears to be the counterpart of principal component analysis for categorical data."}, {"text": "Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining per se, but a result of the preparation of data before\u2014and for the purposes of\u2014the analysis. 
The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.It is recommended to be aware of the following before data are collected:"}, {"text": "Western legal frameworks emphasize more and more on data protection and data traceability. White House 2012 Report recommended the application of a data minimization principle, which is mentioned in European GDPR. In some cases, it is illegal to transfer data from a country to another (e.g., genomic data), however international consortia are sometimes necessary for scientific advances."}, {"text": "Western legal frameworks emphasize more and more on data protection and data traceability. White House 2012 Report recommended the application of a data minimization principle, which is mentioned in European GDPR. In some cases, it is illegal to transfer data from a country to another (e.g., genomic data), however international consortia are sometimes necessary for scientific advances."}]}, {"question": "What is ROC AUC score", "positive_ctxs": [{"text": "AUC - ROC curve is a performance measurement for classification problem at various thresholds settings. ROC is a probability curve and AUC represents degree or measure of separability. By analogy, Higher the AUC, better the model is at distinguishing between patients with disease and no disease."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "There is a summary measure of the diagnostic ability of a binary classifier system that is also called Gini coefficient, which is defined as twice the area between the receiver operating characteristic (ROC) curve and its diagonal. It is related to the AUC (Area Under the ROC Curve) measure of performance given by"}, {"text": "Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution, and AUC has been linked to a number of other performance metrics such as the Brier score.Another problem with ROC AUC is that reducing the ROC Curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as Informedness or DeltaP are recommended. These measures are essentially equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP = Markedness represents the dual (viz. 
predicting the prediction from the real class) and their geometric mean is the Matthews correlation coefficient. Whereas ROC AUC varies between 0 and 1 \u2014 with an uninformative classifier yielding 0.5 \u2014 the alternative measures known as Informedness, Certainty and Gini Coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and \u22121 represents the \"perverse\" case of full informedness always giving the wrong response."}, {"text": "Nonetheless, the coherence of AUC as a measure of aggregated classification performance has been vindicated, in terms of a uniform rate distribution, and AUC has been linked to a number of other performance metrics such as the Brier score. Another problem with ROC AUC is that reducing the ROC Curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as Informedness or DeltaP are recommended. These measures are essentially equivalent to the Gini for a single prediction point with DeltaP' = Informedness = 2AUC-1, whilst DeltaP = Markedness represents the dual (viz. 
predicting the prediction from the real class) and their geometric mean is the Matthews correlation coefficient. Whereas ROC AUC varies between 0 and 1 \u2014 with an uninformative classifier yielding 0.5 \u2014 the alternative measures known as Informedness, Certainty and Gini Coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and \u22121 represents the \"perverse\" case of full informedness always giving the wrong response."}]}, {"question": "What is gradient boosting used for", "positive_ctxs": [{"text": "Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}, {"text": "The Savage loss is quasi-convex and is bounded for large negative values which makes it less sensitive to outliers. 
The Savage loss has been used in gradient boosting and the SavageBoost algorithm."}, {"text": "Gradient boosting is typically used with decision trees (especially CART trees) of a fixed size as base learners. For this special case, Friedman proposes a modification to gradient boosting method which improves the quality of fit of each base learner."}, {"text": "Gradient boosting is typically used with decision trees (especially CART trees) of a fixed size as base learners. For this special case, Friedman proposes a modification to gradient boosting method which improves the quality of fit of each base learner."}, {"text": "Gradient boosting is typically used with decision trees (especially CART trees) of a fixed size as base learners. For this special case, Friedman proposes a modification to gradient boosting method which improves the quality of fit of each base learner."}]}, {"question": "Can random forest handle correlated variables", "positive_ctxs": [{"text": "Random forest (RF) is a machine-learning method that generally works well with high-dimensional problems and allows for nonlinear relationships between predictors; however, the presence of correlated predictors has been shown to impact its ability to identify strong predictors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees."}, {"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. 
Regularized random forest (RRF) is one type of regularized trees."}, {"text": "As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the \u201cobserved\u201d data from suitably generated synthetic data."}, {"text": "The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. A random forest dissimilarity can be attractive because it handles mixed variable types very well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The random forest dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the \"Addcl 1\" random forest dissimilarity weighs the contribution of each variable according to how dependent it is on other variables."}, {"text": "The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. 
This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}]}, {"question": "What is the difference between machine learning and regression", "positive_ctxs": [{"text": "So regression performance is measured by how close it fits an expected line/curve, while machine learning is measured by how good it can solve a certain problem, with whatever means necessary. I'll argue that the distinction between machine learning and statistical inference is clear."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is Markov Chain Monte Carlo and why it matters", "positive_ctxs": [{"text": "Abstract. Markov chain Monte Carlo (MCMC) is a simulation technique that can be used to find the posterior distribution and to sample from it. 
Thus, it is used to fit a model and to draw samples from the joint posterior distribution of the model parameters. The software OpenBUGS and Stan are MCMC samplers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In general, this integral will not be tractable analytically or symbolically and must be evaluated by numerical methods. Stochastic (random) or deterministic approximations may be used. Example stochastic methods are Markov Chain Monte Carlo and Monte Carlo sampling."}, {"text": "In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain Monte Carlo samplers is only related to the number of interacting Markov chain Monte Carlo samplers. These advanced particle methodologies belong to the class of Feynman-Kac particle models, also called Sequential Monte Carlo or particle filter methods in Bayesian inference and signal processing communities. Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selection genetic particle algorithm with Markov chain Monte Carlo mutations."}, {"text": "There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior)."}, {"text": "There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. 
Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior)."}, {"text": "In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler. These interacting Markov chain Monte Carlo samplers can be interpreted as a way to run in parallel a sequence of Markov chain Monte Carlo samplers. For instance, interacting simulated annealing algorithms are based on independent Metropolis-Hastings moves interacting sequentially with a selection-resampling type mechanism."}, {"text": "The term \"particle filters\" was first coined in 1996 by Del Moral, and the term \"sequential Monte Carlo\" by Liu and Chen in 1998. Subset simulation and Monte Carlo splitting techniques are particular instances of genetic particle schemes and Feynman-Kac particle models equipped with Markov chain Monte Carlo mutation transitions"}, {"text": "Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques (MCMC), conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large scale systems, unstable processes, or when the nonlinearities are not sufficiently smooth."}]}, {"question": "What statistical analysis should I use to compare two groups", "positive_ctxs": [{"text": "When comparing two groups, you need to decide whether to use a paired test. When comparing three or more groups, the term paired is not apt and the term repeated measures is used instead. 
Use an unpaired test to compare groups when the individual values are not paired or matched with one another."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Suppose that we want to compare two models: one with a normal distribution of y and one with a normal distribution of log(y). We should not directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of y."}, {"text": "Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels."}, {"text": "Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels."}, {"text": "Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels."}, {"text": "Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). 
Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels."}, {"text": "In a randomized trial with two treatment groups, group sequential testing may for example be conducted in the following manner: After n subjects in each group are available an interim analysis is conducted. A statistical test is performed to compare the two groups and if the null hypothesis is rejected the trial is terminated; otherwise, the trial continues, another n subjects per group are recruited, and the statistical test is performed again, including all subjects. If the null is rejected, the trial is terminated, and otherwise it continues with periodic evaluations until a maximum number of interim analyses have been performed, at which point the last statistical test is conducted and the trial is discontinued."}, {"text": "In a randomized trial with two treatment groups, group sequential testing may for example be conducted in the following manner: After n subjects in each group are available an interim analysis is conducted. A statistical test is performed to compare the two groups and if the null hypothesis is rejected the trial is terminated; otherwise, the trial continues, another n subjects per group are recruited, and the statistical test is performed again, including all subjects. If the null is rejected, the trial is terminated, and otherwise it continues with periodic evaluations until a maximum number of interim analyses have been performed, at which point the last statistical test is conducted and the trial is discontinued."}]}, {"question": "What is multivariate Cox regression analysis", "positive_ctxs": [{"text": "The Cox (proportional hazards or PH) model (Cox, 1972) is the most commonly used multivariate approach for analysing survival time data in medical research. 
It is a survival analysis regression model, which describes the relation between the event incidence, as expressed by the hazard function and a set of covariates."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model."}, {"text": "The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. 
The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}]}, {"question": "What does cross validation reduce", "positive_ctxs": [{"text": "To reduce variability we perform multiple rounds of cross-validation with different subsets from the same data. We combine the validation results from these multiple rounds to come up with an estimate of the model's predictive performance. Cross-validation will give us a more accurate estimate of a model's performance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. 
In other words, validation subsets may overlap."}, {"text": "The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap."}, {"text": "The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap."}, {"text": "One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets."}]}, {"question": "Is there a relation between Boltzmann machines and Markov random fields", "positive_ctxs": [{"text": "It is a Markov random field. It was translated from statistical physics for use in cognitive science. The Boltzmann machine is based on stochastic spin-glass model with an external field, i.e., a Sherrington\u2013Kirkpatrick model that is a stochastic Ising Model and applied to machine learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "a pair of nodes from each of the two groups of units (commonly referred to as the \"visible\" and \"hidden\" units respectively) may have a symmetric connection between them; and there are no connections between nodes within a group. 
By contrast, \"unrestricted\" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm.Restricted Boltzmann machines can also be used in deep learning networks."}, {"text": "The units in Boltzmann machines are divided into two groups: visible units and hidden units. General Boltzmann machines allow connection between any units. However, learning is impractical using general Boltzmann Machines because the computational time is exponential to the size of the machine."}, {"text": "Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field, conditional random field (CRF), and Gaussian random field. An MRF exhibits the Markov property"}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. 
An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics."}]}, {"question": "What is the difference between null and alternative hypothesis", "positive_ctxs": [{"text": "The null hypothesis is a general statement that states that there is no relationship between two phenomenons under consideration or that there is no association between two groups. An alternative hypothesis is a statement that describes that there is a relationship between two selected variables in a study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. 
The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. The second type of error occurs when the null hypothesis is wrongly not rejected."}, {"text": "The process of distinguishing between the null hypothesis and the alternative hypothesis is aided by considering two conceptual types of errors. The first type of error occurs when the null hypothesis is wrongly rejected. 
The second type of error occurs when the null hypothesis is wrongly not rejected."}]}, {"question": "What does Fourier mean", "positive_ctxs": [{"text": "In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes a function (often a function of time, or a signal) into its constituent frequencies, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, under most types of discrete Fourier transform, such as FFT and Hartley, the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance"}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "; however, for many signals of interest the Fourier transform does not formally exist. Regardless, Parseval's Theorem tells us that we can re-write the average power as follows."}, {"text": "When the input function/waveform is periodic, the Fourier transform output is a Dirac comb function, modulated by a discrete sequence of finite-valued coefficients that are complex-valued in general. These are called Fourier series coefficients. The term Fourier series actually refers to the inverse Fourier transform, which is a sum of sinusoids at discrete frequencies, weighted by the Fourier series coefficients."}, {"text": "For example, actors are allowed to pipeline the processing of messages. 
What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):"}]}, {"question": "What is a probability distribution explain your answer", "positive_ctxs": [{"text": "A probability distribution is a statistical function that describes all the possible values and likelihoods that a random variable can take within a given range. These factors include the distribution's mean (average), standard deviation, skewness, and kurtosis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If there is a procedure for verifying whether the answer given by a Monte Carlo algorithm is correct, and the probability of a correct answer is bounded above zero, then with probability one running the algorithm repeatedly while testing the answers will eventually give a correct answer. Whether this process is a Las Vegas algorithm depends on whether halting with probability one is considered to satisfy the definition."}, {"text": "To answer an interventional question, such as \"What is the probability that it would rain, given that we wet the grass?\" the answer is governed by the post-intervention joint distribution function"}, {"text": "To answer an interventional question, such as \"What is the probability that it would rain, given that we wet the grass?\" the answer is governed by the post-intervention joint distribution function"}, {"text": "Now, assume (for example) that there are 5 green and 45 red marbles in the urn. 
Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?"}, {"text": "A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be?"}, {"text": "What is the probability of winning the car given the player has picked door 1 and the host has opened door 3?The answer to the first question is 2/3, as is correctly shown by the \"simple\" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1/2. This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1/3) or if the car is behind door 2 (also originally with probability 1/3)."}, {"text": "The question that we desire to answer is: \"what is the probability that a given document D belongs to a given class C?\" In other words, what is"}]}, {"question": "Which of the following is a difference between the T distribution and the standard normal Z distribution group of answer choices", "positive_ctxs": [{"text": "The t-distribution cannot be calculated without a known standard deviation, while the standard normal distribution can be."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One of the most popular application of cumulative distribution function is standard normal table, also called the unit normal table or Z table, is the value of cumulative distribution function of the normal distribution. 
It is very useful to use Z-table not only for probabilities below a value which is the original application of cumulative distribution function, but also above and/or between values on standard normal distribution, and it was further extended to any normal distribution."}, {"text": "With two paired samples, we look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores. This creates the following relationship between the t-statistic to test for a difference in the means of the two groups and Cohen's d:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. 
The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}]}, {"question": "How do you interpret a stepwise regression analysis", "positive_ctxs": [{"text": "8:3417:13Suggested clip \u00b7 72 secondsStepwise regression procedures in SPSS (new, 2018) - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "By itself, a regression is simply a calculation using the data. In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions."}, {"text": "By itself, a regression is simply a calculation using the data. 
In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions."}]}, {"question": "What is hidden Markov in speech recognition", "positive_ctxs": [{"text": "Abstract. Hidden Markov Models (HMMs) provide a simple and effective frame- work for modelling time-varying spectral vector sequences. As a con- sequence, almost all present day large vocabulary continuous speech recognition (LVCSR) systems are based on HMMs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Markov sources are commonly used in communication theory, as a model of a transmitter. Markov sources also occur in natural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques of hidden Markov models, such as the Viterbi algorithm."}, {"text": "A hidden semi-Markov model (HSMM) is a statistical model with the same structure as a hidden Markov model except that the unobservable process is semi-Markov rather than Markov. This means that the probability of there being a change in the hidden state depends on the amount of time that has elapsed since entry into the current state. This is in contrast to hidden Markov models where there is a constant probability of changing state given survival in the state up to that time.For instance Sanson & Thomson (2001) modelled daily rainfall using a hidden semi-Markov model."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. 
HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "Hidden Markov models are generative models, in which the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities), is modeled. The above algorithms implicitly assume a uniform prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions."}, {"text": "A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist."}]}, {"question": "What is an intuitive explanation of Gradient Boosting", "positive_ctxs": [{"text": "Gradient boosting is a type of machine learning boosting. It relies on the intuition that the best possible next model, when combined with previous models, minimizes the overall prediction error. 
The key idea is to set the target outcomes for this next model in order to minimize the error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem.In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observation per cluster is fixed at n. Below,"}, {"text": "Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem.In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observation per cluster is fixed at n. Below,"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Also, several digital camera systems incorporate an automatic pixel binning function to improve image contrast.Binning is also used in machine learning to speed up the decision-tree boosting method for supervised classification and regression in algorithms such as Microsoft's LightGBM and scikit-learn's Histogram-based Gradient Boosting Classification Tree."}, {"text": "Abductive validation is the process of validating a given hypothesis through abductive reasoning. This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data."}, {"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. 
The logistic function is a sigmoid function, which takes any real input"}, {"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is a sigmoid function, which takes any real input"}]}, {"question": "What is the point of K means clustering", "positive_ctxs": [{"text": "k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A video camera can be seen as an approximation of a pinhole camera, which means that each point in the image is illuminated by some (normally one) point in the scene in front of the camera, usually by means of light that the scene point reflects from a light source. Each visible point in the scene is projected along a straight line that passes through the camera aperture and intersects the image plane. This means that at a specific point in time, each point in the image refers to a specific point in the scene."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "COBWEB: is an incremental clustering technique that keeps a hierarchical clustering model in the form of a classification tree. For each new point COBWEB descends the tree, updates the nodes along the way and looks for the best node to put the point on (using a category utility function)."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? 
What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is an advantage of backward chaining", "positive_ctxs": [{"text": "Depending on the skill being taught, backward chaining has a distinct advantage: It directly links the independent completion of a task to the immediate reward or reinforcement. Once the child can complete the last step independently, he or she can work on also completing the next-to-last step independently."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Forward chaining (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining."}, {"text": "Inference engines work primarily in one of two modes either special rule or facts: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved."}, {"text": "Backward chaining is a bit less straight forward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. 
So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}, {"text": "Backward chaining is a bit less straight forward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}, {"text": "Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems."}, {"text": "Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward chaining inference. The forward chaining approach is often employed by expert systems, such as CLIPS."}, {"text": "Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate table bases for chess endgames for computer chess."}]}, {"question": "How do you visualize data effectively", "positive_ctxs": [{"text": "For more tips, read 10 Best Practices for Effective Dashboards.Choose the right charts and graphs for the job. Use predictable patterns for layouts. Tell data stories quickly with clear color cues. Incorporate contextual clues with shapes and designs. Strategically use size to visualize values.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? 
How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Respondents worry that discussing funding or compensation would affect enrollment, effectively swaying participants from joining a research study. In most respondents\u2019 experience, most patients do not even ask for that information, so they assume that they do not have to discuss it with them and not jeopardize enrollment. When asked if information about funding or compensation would be important to provide to patients, one physician replied \u201c...certainly it may influence or bring up in their mind questions whether or not, you know, we want them to participate because we\u2019re gonna get paid for this, you know, budget dollar amount."}, {"text": "While historical data-group plots (bar charts, box plots, and violin plots) do not display the comparison, estimation plots add a second axis to explicitly visualize the effect size."}, {"text": "To visualize the two-dimensional case, one can imagine a person walking randomly around a city. The city is effectively infinite and arranged in a square grid of sidewalks. 
At every intersection, the person randomly chooses one of the four possible routes (including the one originally travelled from)."}]}, {"question": "Can you average categorical data", "positive_ctxs": [{"text": "By using these midpoints as the categorical response values, the researcher can easily calculate averages. Granted, this average will only be an estimate or a \u201cballpark\u201d value but is still extremely useful for the purpose of data analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. Often, purely categorical data are summarised in the form of a contingency table."}, {"text": "Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. Often, purely categorical data are summarised in the form of a contingency table."}, {"text": "Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. 
Often, purely categorical data are summarised in the form of a contingency table."}, {"text": "If you look at [the cube plot], you can see that the choice of cage design did not make a lot of difference. \u2026 But, if you average the pairs of numbers for cage design, you get the [table below], which shows what the two other factors did. \u2026 It led to the extraordinary discovery that, in this particular application, the life of a bearing can be increased fivefold if the two factor(s) outer ring osculation and inner ring heat treatments are increased together.\""}, {"text": "For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE).Because this process creates multiple new variables, it is prone to creating a big p problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.In practical usage this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables."}, {"text": "Additionally, data should always be categorical. Continuous data can first be converted to categorical data, with some loss of information. With both continuous and categorical data, it would be best to use logistic regression."}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? 
Are there other diseases that can be the cause?"}]}, {"question": "What is back propagation in machine learning", "positive_ctxs": [{"text": "The Backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule or gradient descent. The weights that minimize the error function is then considered to be a solution to the learning problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:"}, {"text": "Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. 
The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:"}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is a hypergeometric probability distribution", "positive_ctxs": [{"text": "In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of successes (random draws for which the object drawn has a specified feature) in draws, without replacement, from a finite population of size that contains exactly objects with"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. 
A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. 
Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind):"}, {"text": "The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test. Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see)."}, {"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}]}, {"question": "What defines an outlier", "positive_ctxs": [{"text": "Definition of outliers. An outlier is an observation that lies an abnormal distance from other values in a random sample from a population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set. 
An outlier can cause serious problems in statistical analyses."}, {"text": "There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. There are various methods of outlier detection. Some are graphical such as normal probability plots."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "HTTP defines methods (sometimes referred to as verbs, but nowhere in the specification does it mention verb, nor is OPTIONS or HEAD a verb) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density, the more likely the query point is an outlier. Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, works quite well also in comparison to more recent and more complex approaches, according to a large scale experimental analysis."}]}, {"question": "How do you calculate an unbiased estimator", "positive_ctxs": [{"text": "A statistic d is called an unbiased estimator for a function of the parameter g(\u03b8) provided that for every choice of \u03b8, E\u03b8d(X) = g(\u03b8). 
Any estimator that is not unbiased is called biased. The bias is the difference bd(\u03b8) = E\u03b8d(X) \u2212 g(\u03b8). We can assess the quality of an estimator by computing its mean square error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "For example, the square root of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased sample variance, the corrected sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate \u2013 see unbiased estimation of standard deviation for a discussion in this case."}, {"text": "For example, the square root of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased sample variance, the corrected sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate \u2013 see unbiased estimation of standard deviation for a discussion in this case."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. 
A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). 
In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cram\u00e9r\u2013Rao bound, which is an absolute lower bound on variance for statistics of a variable."}, {"text": "for all values of the parameter, then the estimator is called efficient.Equivalently, the estimator achieves equality in the Cram\u00e9r\u2013Rao inequality for all \u03b8. The Cram\u00e9r\u2013Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the \"best\" an unbiased estimator can be."}]}, {"question": "What is standardized effect", "positive_ctxs": [{"text": "Standardized effect size statistics remove the units of the variables in the effect. The second type is simple. These statistics describe the size of the effect, but remain in the original units of the variables. So for example, say you're comparing the mean temperature of soil under two different conditions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. 
However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately \"meaningful\" units may be preferable for reporting purposes."}, {"text": "Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately \"meaningful\" units may be preferable for reporting purposes."}, {"text": "Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately \"meaningful\" units may be preferable for reporting purposes."}, {"text": "Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. 
However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately \"meaningful\" units may be preferable for reporting purposes."}, {"text": "is the common standard deviation of the outcomes in the treated and control groups. If constructed appropriately, a standardized effect size, along with the sample size, will completely determine the power. An unstandardized (direct) effect size is rarely sufficient to determine the power, as it does not contain information about the variability in the measurements."}, {"text": "A similar effect size estimator for multiple comparisons (e.g., ANOVA) is the \u03a8 root-mean-square standardized effect. This essentially presents the omnibus difference of the entire model adjusted by the root mean square, analogous to d or g. The simplest formula for \u03a8, suitable for one-way ANOVA, is"}]}, {"question": "What is non invertible matrix", "positive_ctxs": [{"text": "A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Non-square matrices (m-by-n matrices for which m \u2260 n) do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring."}, {"text": "To see this, consider the set of invertible square matrices of a given dimension over a given field. Here, it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. 
However, matrix multiplication is not commutative, which shows that this group is non-abelian."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection."}, {"text": "The odds ratio has another unique property of being directly mathematically invertible whether analyzing the OR as either disease survival or disease onset incidence \u2013 where the OR for survival is direct reciprocal of 1/OR for risk. This is known as the 'invariance of the odds ratio'. In contrast, the relative risk does not possess this mathematical invertible property when studying disease survival vs. onset incidence."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is data frequency table", "positive_ctxs": [{"text": "A data set can also be presented by means of a data frequency table, a table in which each distinct value is listed in the first row and its frequency, which is the number of times the value appears in the data set, is listed below it in the second row."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. 
Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. An example is shown below"}, {"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. An example is shown below"}, {"text": "If successful, the known equation is enough to report the frequency distribution and a table of data will not be required. Further, the equation helps interpolation and extrapolation. However, care should be taken with extrapolating a cumulative frequency distribution, because this may be a source of errors."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Descriptive statistics describe a sample or population. They can be part of exploratory data analysis.The appropriate statistic depends on the level of measurement. 
For nominal variables, a frequency table and a listing of the mode(s) is sufficient."}]}, {"question": "What is factor analysis in multivariate analysis", "positive_ctxs": [{"text": "Factor Analysis (FA) is an exploratory technique applied to a set of outcome variables that seeks to find the underlying factors (or subsets of variables) from which the observed variables were generated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. 
It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}]}, {"question": "How many types of mean in statistics", "positive_ctxs": [{"text": "There are different types of mean, viz. arithmetic mean, weighted mean, geometric mean (GM) and harmonic mean (HM). If mentioned without an adjective (as mean), it generally refers to the arithmetic mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the business world, descriptive statistics provides a useful summary of many types of data. For example, investors and brokers may use a historical account of return behaviour by performing empirical and analytical analyses on their investments in order to make better investing decisions in the future."}, {"text": "While many censuses were conducted in antiquity, there are few population statistics that survive. One example though can be found in the Bible, in chapter 1 of the Book of Numbers. 
Not only are the statistics given, but the method used to compile those statistics is also described."}, {"text": "These intimacies consist of grooming and various forms of body contact. Stress responses, including increased heart rates, usually decrease after these reconciliatory signals. Different types of primates, as well as many other species who live in groups, display different types of conciliatory behavior."}, {"text": "The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares."}, {"text": "The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares."}, {"text": "Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied."}, {"text": "Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied."}]}, {"question": "Why we take SSE sum of square error and RMSE root mean square error", "positive_ctxs": [{"text": "The squared error has some nice properties: It is symmetrical. 
That means, if the actual value is and you predict or , you get the same error measure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error."}, {"text": "In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error."}]}, {"question": "What is low shot learning", "positive_ctxs": [{"text": "Low-shot learning deep learning is based on the concept that reliable algorithms can be created to make predictions from minimalist datasets."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game 
Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "President Kennedy was shot dead during the parade. and The convict escaped on July 15th. We could translate the sentence The convict was shot dead during the parade."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "For example, the shooting percentage in basketball is a descriptive statistic that summarizes the performance of a player or a team. This number is the number of shots made divided by the number of shots taken. For example, a player who shoots 33% is making approximately one shot in every three."}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "Why weight is used in neural network", "positive_ctxs": [{"text": "Weights(Parameters) \u2014 A weight represent the strength of the connection between units. If the weight from node 1 to node 2 has greater magnitude, it means that neuron 1 has greater influence over neuron 2. A weight brings down the importance of the input value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. 
A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. 
The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}]}, {"question": "Can random forest be used for multiclass classification", "positive_ctxs": [{"text": "Multi-class Classification using Decision Tree, Random Forest and Extra Trees Algorithm in Python: An End-To-End Data Science Recipe \u2014 016. a) Different types of Machine Learning problems. i) How to implement Decision Tree, Random Forest and Extra Tree Algorithms for Multiclass Classification in Python."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Classification can be thought of as two separate problems \u2013 binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers."}, {"text": "Classification can be thought of as two separate problems \u2013 binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. 
Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers."}, {"text": "Classification can be thought of as two separate problems \u2013 binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers."}, {"text": "Classification can be thought of as two separate problems \u2013 binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers."}, {"text": "Classification can be thought of as two separate problems \u2013 binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers."}, {"text": "Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. 
In multiclass classification the softmax activation is often used."}, {"text": "is to fit a random forest to the data. During the fitting process the out-of-bag error for each data point is recorded and averaged over the forest (errors on an independent test set can be substituted if bagging is not used during training)."}]}, {"question": "What are decision trees commonly used for", "positive_ctxs": [{"text": "Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "End nodes \u2013 typically represented by trianglesDecision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. Another use of decision trees is as a descriptive means for calculating conditional probabilities."}, {"text": "End nodes \u2013 typically represented by trianglesDecision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. Another use of decision trees is as a descriptive means for calculating conditional probabilities."}, {"text": "End nodes \u2013 typically represented by trianglesDecision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. 
Another use of decision trees is as a descriptive means for calculating conditional probabilities."}, {"text": "End nodes \u2013 typically represented by trianglesDecision trees are commonly used in operations research and operations management. If, in practice, decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. Another use of decision trees is as a descriptive means for calculating conditional probabilities."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "What is the null hypothesis for Heteroskedasticity", "positive_ctxs": [{"text": "This is the basis of the Breusch\u2013Pagan test. It is a chi-squared test: the test statistic is distributed n\u03c72 with k degrees of freedom. If the test statistic has a p-value below an appropriate threshold (e.g. p < 0.05) then the null hypothesis of homoskedasticity is rejected and heteroskedasticity assumed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The null hypothesis is that there is no association between the treatment and the outcome. More precisely, the null hypothesis is"}, {"text": "The null hypothesis is that there is no association between the treatment and the outcome. More precisely, the null hypothesis is"}, {"text": "The confidence level should indicate the likelihood that much more and better data would still be able to exclude the null hypothesis on the same side.The concept of a null hypothesis is used differently in two approaches to statistical inference. In the significance testing approach of Ronald Fisher, a null hypothesis is rejected if the observed data is significantly unlikely to have occurred if the null hypothesis were true. 
In this case, the null hypothesis is rejected and an alternative hypothesis is accepted in its place."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}, {"text": "The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false."}]}, {"question": "How do you deal with confounders within a statistical study", "positive_ctxs": [{"text": "There are various ways to modify a study design to actively exclude or control confounding variables (3) including Randomization, Restriction and Matching. In randomization, the random assignment of study subjects to exposure categories breaks any links between exposure and confounders."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Case-control studies assign confounders to both groups, cases and controls, equally. For example, if somebody wanted to study the cause of myocardial infarct and thinks that the age is a probable confounding variable, each 67-year-old infarct patient will be matched with a healthy 67-year-old \"control\" person.
In case-control studies, matched variables most often are the age and sex."}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "Therefore, seeing this advertisement could lead people astray to start smoking because of its induced appeal. In a study by Slovic et al. (2005), he released a survey to smokers in which he asked \u201cIf you had it to do all over again, would you start smoking?\u201d and more than 85% of adult smokers and about 80% of young smokers (between the ages of 14-22) answered \u201cNo.\u201d He found that most smokers, especially those that start at a younger age, do not take the time and think about how their future selves will perceive the risks associated with smoking."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "How do you know if its a lower or upper tailed test", "positive_ctxs": [{"text": "In an upper-tailed test the decision rule has investigators reject H0 if the test statistic is larger than the critical value. In a lower-tailed test the decision rule has investigators reject H0 if the test statistic is smaller than the critical value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "For example, if a tighter bound on the distribution is desired on the upper portion of the support, a higher rate of violation can be allowed at the upper portion of the support at the expense of having a lower rate of violation, and thus a looser bound, for the lower portion of the support."}, {"text": "The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "The subscript gives the symbol for a bound variable (i in this case), called the \"index of multiplication\", together with its lower bound (1), whereas the superscript (here 4) gives its upper bound. The lower and upper bound are expressions denoting integers.
The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and incremented by 1 up to (and including) the upper bound."}]}, {"question": "What is Taguchi quality loss function", "positive_ctxs": [{"text": "The quality loss function as defined by Taguchi is the loss imparted to the society by the product from the time the product is designed to the time it is shipped to the customer. In fact, he defined quality as the conformity around a target value with a lower standard deviation in the outputs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Through his concept of the quality loss function, Taguchi explained that from the customer's point of view this drop of quality is not sudden. The customer experiences a loss of quality the moment product specification deviates from the 'target value'. This 'loss' is depicted by a quality loss function and it follows a parabolic curve mathematically given by L = k(y\u2013m)\u00b2, where m is the theoretical 'target value' or 'mean value' and y is the actual size of the product, k is a constant and L is the loss."}, {"text": "The Taguchi loss function is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement), it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. Instead 'loss' in value progressively increases as variation increases from the intended condition."}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm).
This familiar loss function is used in Ordinary Least Squares regression."}, {"text": "There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is monotonic and continuously differentiable, the classifier is always driven toward purer solutions. Zhang (2004) provides a loss function based on least squares, a modified Huber loss function:"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "As the loss is convex the optimum solution lies at gradient zero. The gradient of the loss function is (using Denominator layout convention):"}]}, {"question": "What is the mode of a continuous random variable", "positive_ctxs": [{"text": "A mode of a continuous probability distribution is often considered to be any value x at which its probability density function has a locally maximum value, so any peak is a mode. In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables. An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads.
If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables. An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample.
In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "Expected score is the expected value of the scoring rule over all possible values of the target variable. For example, for a continuous random variable we have"}, {"text": "In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. 
Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution."}]}, {"question": "How exactly does max pooling create translation invariance", "positive_ctxs": [{"text": "Achieving translation invariance in Convolutional NNs: Then the max pooling layer takes the output from the convolutional layer and reduces its resolution and complexity. It does so by outputting only the max value from a grid. So the information about the exact position of the max value in the grid is discarded."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling.
Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}, {"text": "In addition to max pooling, pooling units can use other functions, such as average pooling or \u21132-norm pooling.
Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which performs better in practice. Due to the aggressive reduction in the size of the representation, there is a recent trend towards using smaller filters or discarding pooling layers altogether."}]}, {"question": "How do you handle an unbalanced data set", "positive_ctxs": [{"text": "7 Techniques to Handle Imbalanced Data: Use the right evaluation metrics. Resample the training set. Use K-fold Cross-Validation in the right way. Ensemble different resampled datasets. Resample with different ratios. Cluster the abundant class. Design your own models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "In statistics, the range of a set of data is the difference between the largest and smallest values. It can give you a rough idea of how the outcome of the data set will be before you look at it actually"}, {"text": "For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications.
\"The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case."}, {"text": "For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. \"The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case."}, {"text": "For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. \"The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case."}]}, {"question": "What does standard deviation of the mean represent", "positive_ctxs": [{"text": "The standard deviation (SD) measures the amount of variability, or dispersion, from the individual data values to the mean, while the standard error of the mean (SEM) measures how far the sample mean of the data is likely to be from the true population mean. SD is the dispersion of individual data values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. 
Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}]}, {"question": "How do you handle an imbalanced data set", "positive_ctxs": [{"text": "How to Handle Imbalanced Dataset: Change the evaluation metric. If we apply the wrong evaluation metric on the imbalanced dataset, it can give us misleading results. Resample the dataset. Resample means to change the distribution of the imbalance classes in the dataset. Change the algorithm and approach to the problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare?
What purpose is the study to be used for?"}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "In statistics, the range of a set of data is the difference between the largest and smallest values. It can give you a rough idea of how the outcome of the data set will be before you look at it actually"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "What is the use of cumulative distribution function", "positive_ctxs": [{"text": "The cumulative distribution function (CDF) calculates the cumulative probability for a given x-value. 
Use the CDF to determine the probability that a random observation that is taken from the population will be less than or equal to a certain value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One of the most popular applications of the cumulative distribution function is the standard normal table, also called the unit normal table or Z table, which tabulates values of the cumulative distribution function of the normal distribution. The Z-table is useful not only for probabilities below a value, which is the original application of the cumulative distribution function, but also for probabilities above and/or between values on the standard normal distribution, and it was further extended to any normal distribution."}, {"text": "The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function."}, {"text": "The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution, according to the Glivenko\u2013Cantelli theorem. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function."}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function.
The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}, {"text": "The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:"}]}, {"question": "How do you find the expected value example", "positive_ctxs": [{"text": "So, for example, if our random variable were the number obtained by rolling a fair 3-sided die, the expected value would be (1 * 1/3) + (2 * 1/3) + (3 * 1/3) = 2."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "But what about 12 hits, or 17 hits? 
What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is decision theory in statistics", "positive_ctxs": [{"text": "Decision theory is the science of making optimal decisions in the face of uncertainty. Statistical decision theory is concerned with the making of decisions when in the presence of statistical knowledge (data) which sheds light on some of the uncertainties involved in the decision problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In decision theory, a decision rule is a function which maps an observation to an appropriate action. Decision rules play an important role in the theory of statistics and economics, and are closely related to the concept of a strategy in game theory."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. 
This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Thus, to have a clear picture of info-gap's modus operandi and its role and place in decision theory and robust optimization, it is imperative to examine it within this context. In other words, it is necessary to establish info-gap's relation to classical decision theory and robust optimization."}]}, {"question": "What does Akaike information criterion mean", "positive_ctxs": [{"text": "The Akaike information criterion (AIC) is a mathematical method for evaluating how well a model fits the data it was generated from. 
In statistics, AIC is used to compare different possible models and determine which one is the best fit for the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Akaike information criterion was formulated by the statistician Hirotugu Akaike. It was originally named \"an information criterion\". It was first announced in English by Akaike at a 1971 symposium; the proceedings of the symposium were published in 1973."}, {"text": "Akaike information criterion and Schwarz criterion are both used for model selection. Generally when comparing two alternative models, smaller values of one of these criteria will indicate a better model."}, {"text": "Akaike information criterion and Schwarz criterion are both used for model selection. Generally when comparing two alternative models, smaller values of one of these criteria will indicate a better model."}, {"text": "Below is a list of criteria for model selection. The most commonly used criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor)."}, {"text": "Finally, the tree is pruned.The basic LMT induction algorithm uses cross-validation to find a number of LogitBoost iterations that does not overfit the training data. A faster version has been proposed that uses the Akaike information criterion to control LogitBoost stopping."}, {"text": "However, the test can only be used when models are nested (meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), among others."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). 
\"Why Simplicity is no Problem for Bayesians\"."}]}, {"question": "How does LDA algorithm work", "positive_ctxs": [{"text": "Though the name is a mouthful, the concept behind this is very simple. To tell briefly, LDA imagines a fixed set of topics. Each topic represents a set of words. And the goal of LDA is to map all the documents to the topics in a way, such that the words in each document are mostly captured by those imaginary topics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Support vector machine\u2014an algorithm that maximizes the margin between the decision hyperplane and the examples in the training set. Note: Despite its name, LDA does not belong to the class of discriminative models in this taxonomy. However, its name makes sense when we compare LDA to the other main linear dimensionality reduction algorithm: principal components analysis (PCA). LDA is a supervised learning algorithm that utilizes the labels of the data, while PCA is an unsupervised learning algorithm that ignores the labels."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades.
Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. 
An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}]}, {"question": "What is validation machine learning", "positive_ctxs": [{"text": "Definition. In machine learning, model validation is referred to as the process where a trained model is evaluated with a testing data set. The testing data set is a separate portion of the same data set from which the training set is derived. Model validation is carried out after model training."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Content validation (also called face validity) checks how well the scale measures what is supposed to measured. Criterion validation checks how meaningful the scale criteria are relative to other possible criteria. Construct validation checks what underlying construct is being measured."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is a cost function in linear regression", "positive_ctxs": [{"text": "Cost function (J) of Linear Regression is the Root Mean Squared Error (RMSE) between predicted y value (pred) and true y value (y).
Gradient Descent: To update \u03b81 and \u03b82 values in order to reduce Cost function (minimizing RMSE value) and achieving the best fit line the model uses Gradient Descent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression) [1], multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. 
Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x and a weighting vector w is:"}, {"text": "The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression) [1], multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x and a weighting vector w is:"}]}, {"question": "What is vector space in machine learning", "positive_ctxs": [{"text": "Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers, such as, for example, index terms. The model is used to represent documents in an n-dimensional space. But a \u201cdocument\u201d can mean any object you're trying to model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ".When the scalar field F is the real numbers R, the vector space is called a real vector space. When the scalar field is the complex numbers C, the vector space is called a complex vector space. These two cases are the ones used most often in engineering."}, {"text": "In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. In particular, it is commonly used in support vector machine classification.The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as"}, {"text": "NMT departs from phrase-based statistical approaches that use separately engineered subcomponents. 
Neural machine translation (NMT) is not a drastic step beyond what has been traditionally done in statistical machine translation (SMT). Its main departure is the use of vector representations (\"embeddings\", \"continuous space representations\") for words and internal states."}, {"text": "Version space learning is a logical approach to machine learning, specifically binary classification. Version space learning algorithms search a predefined space of hypotheses, viewed as a set of logical sentences. Formally, the hypothesis space is a disjunction"}, {"text": "the vector space of all possible outputs. Statistical learning theory takes the perspective that there is some unknown probability distribution over the product space"}, {"text": "Roughly, affine spaces are vector spaces whose origins are not specified. More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map"}, {"text": "The first generation of semantic space models is the vector space model for information retrieval. Such vector space models for words and their distributional data implemented in their simplest form results in a very sparse vector space of high dimensionality (cf. Reducing the number of dimensions using linear algebraic methods such as singular value decomposition then led to the introduction of latent semantic analysis in the late 1980s and the Random indexing approach for collecting word cooccurrence contexts."}]}, {"question": "What is pre pruning and post pruning in decision tree", "positive_ctxs": [{"text": "Decision Tree - Overfitting There are several approaches to avoiding overfitting in building decision trees. Pre-pruning that stop growing the tree earlier, before it perfectly classifies the training set. 
Post-pruning that allows the tree to perfectly classify the training set, and then post prune the tree."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The statistic-based pruning of leaves from the same and/or different levels of predictive model was proposed as a part of decision stream learning technique. Every group of leaves containing data samples similar according to the test statistics is merged into a new leaf, increasing the number of samples in nodes of trained model and reducing the tree width. The predictive model is growing till no improvements are achievable, considering different data recombinations, and resulting in deep directed acyclic graph architecture and statistically-significant data partition."}, {"text": "PIM Dense Mode (PIM-DM) uses dense multicast routing. It implicitly builds shortest-path trees by flooding multicast traffic domain wide, and then pruning back branches of the tree where no receivers are present. PIM-DM is straightforward to implement but generally has poor scaling properties."}, {"text": "pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall. W. Rosenbluth.The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a \"Monte Carlo filter\", a slightly modified version of this article appearing in 1996."}, {"text": "However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as the horizon effect. 
A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information. Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set."}, {"text": "In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped."}, {"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features. A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}, {"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features. A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node).
While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}]}, {"question": "What are the correct steps of a machine learning process", "positive_ctxs": [{"text": "The 7 Steps of Machine Learning: 1 - Data Collection. 2 - Data Preparation. 3 - Choose a Model. 4 - Train the Model. 5 - Evaluate the Model. 6 - Parameter Tuning. 7 - Make Predictions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "CCMs have also been applied to latent learning frameworks, where the learning problem is defined over a latent representation layer. Since the notion of a correct representation is inherently ill-defined, no gold-standard labeled data regarding the representation decision is available to the learner. Identifying the correct (or optimal) learning representation is viewed as a structured prediction process and therefore modeled as a CCM."}, {"text": "Grammar induction (or grammatical inference) is the process in machine learning of learning a formal grammar (usually as a collection of re-write rules or productions or alternatively as a finite state machine or automaton of some kind) from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs."}, {"text": "A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement.
Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states."}, {"text": "In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system."}, {"text": "In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system."}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}]}, {"question": "What is Gamma in SVC", "positive_ctxs": [{"text": "gamma is a parameter for non linear hyperplanes. 
The higher the gamma value it tries to exactly fit the training data set.\ngammas = [0.1, 1, 10, 100]\nfor gamma in gammas: svc = svm.SVC(kernel='rbf', gamma=gamma).fit(X, y)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "SVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning. It is considered a fundamental method in data science."}, {"text": "SVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning. It is considered a fundamental method in data science."}, {"text": "SVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning. It is considered a fundamental method in data science."}, {"text": "SVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning. It is considered a fundamental method in data science."}, {"text": "SVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning. It is considered a fundamental method in data science."}, {"text": "Gamma is the accepted term in broadcast engineering, and the professional film and television industry in general. However there is confusion about:"}, {"text": "Gamma correction, or often simply gamma, is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression:"}]}, {"question": "When should I use weighted kappa", "positive_ctxs": [{"text": "The weighted kappa is calculated using a predefined table of weights which measure the degree of disagreement between the two raters, the higher the disagreement the higher the weight."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered.
Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above."}, {"text": "are elements in the weight, observed, and expected matrices, respectively. When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above."}, {"text": "are elements in the weight, observed, and expected matrices, respectively. 
When diagonal cells contain weights of 0 and all off-diagonal cells weights of 1, this formula produces the same value of kappa as the calculation given above."}, {"text": "When performing multiple sample contrasts or tests, the Type I error rate tends to become inflated, raising concerns about multiple comparisons."}]}, {"question": "Where do we use standard deviation and variance", "positive_ctxs": [{"text": "Taking the square root of the variance gives us the units used in the original scale and this is the standard deviation. Standard deviation is the measure of spread most commonly used in statistical practice when the mean is used to calculate central tendency. Thus, it measures spread around the mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These are the critical values of the normal distribution with right tail probability. However, t-values are used when the sample size is below 30 and the standard deviation is unknown.When the variance is unknown, we must use a different estimator:"}, {"text": "These are the critical values of the normal distribution with right tail probability. However, t-values are used when the sample size is below 30 and the standard deviation is unknown.When the variance is unknown, we must use a different estimator:"}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. 
To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Trends hint at interactions among factors or among observations. One rule of thumb: \"If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results"}, {"text": "Trends hint at interactions among factors or among observations. One rule of thumb: \"If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results"}]}, {"question": "What is the goal of a generative adversarial network GAN )", "positive_ctxs": [{"text": "The generator is a convolutional neural network and the discriminator is a deconvolutional neural network. 
The goal of the generator is to artificially manufacture outputs that could easily be mistaken for real data. The goal of the discriminator is to identify which outputs it receives have been artificially created."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. 
Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning.The core idea of a GAN is based on the \"indirect\" training through the discriminator, which itself is also being updated dynamically."}, {"text": "Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning.The core idea of a GAN is based on the \"indirect\" training through the discriminator, which itself is also being updated dynamically."}, {"text": "Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for unsupervised learning, GANs have also proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning.The core idea of a GAN is based on the \"indirect\" training through the discriminator, which itself is also being updated dynamically."}]}, {"question": "Can you split a multinomial logistic regression model into separate binary logistic regression models", "positive_ctxs": [{"text": "If the outcomes are mutually independent, then yes the method is valid. If the outcomes are mutually exclusive, then no, the method is not valid. It's easy to see why this is the case. 
If you have three binary models, then the sum of the outcomes do not necessarily sum to one."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). 
Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}]}, {"question": "What is correlation and autocorrelation", "positive_ctxs": [{"text": "Autocorrelation, also known as serial correlation, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations as a function of the time lag between them."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The traditional test for the presence of first-order autocorrelation is the Durbin\u2013Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin-Watson can be linearly mapped however to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch\u2013Godfrey test."}, {"text": "The traditional test for the presence of first-order autocorrelation is the Durbin\u2013Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin-Watson can be linearly mapped however to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch\u2013Godfrey test."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. 
When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function."}, {"text": "In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function."}, {"text": "Responses to nonzero autocorrelation include generalized least squares and the Newey\u2013West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have"}, {"text": "Responses to nonzero autocorrelation include generalized least squares and the Newey\u2013West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have"}]}, {"question": "What is the activation function used for", "positive_ctxs": [{"text": "Choosing the right Activation FunctionSigmoid functions and their combinations generally work better in the case of classifiers.Sigmoids and tanh functions are sometimes avoided due to the vanishing gradient problem.ReLU function is a general activation function and is used in most cases these days.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). 
A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}, {"text": "Below is an example of a learning algorithm for a single-layer perceptron. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable."}, {"text": "Below is an example of a learning algorithm for a single-layer perceptron. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable."}]}, {"question": "Why is information entropy", "positive_ctxs": [{"text": "Information provides a way to quantify the amount of surprise for an event measured in bits. Entropy provides a measure of the average amount of information needed to represent an event drawn from a probability distribution for a random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The conditional quantum entropy is an entropy measure used in quantum information theory. 
It is a generalization of the conditional entropy of classical information theory."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy. Shannon entropy is a broad and general concept used in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message."}, {"text": "Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.Entropy can be normalized by dividing it by information length. This ratio is called metric entropy and is a measure of the randomness of the information."}, {"text": "Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution."}, {"text": "Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution."}, {"text": "Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. 
The larger the entropy, the less information is provided by the distribution."}]}, {"question": "Why do prices and income follow a log normal distribution", "positive_ctxs": [{"text": "While the returns for stocks usually have a normal distribution, the stock price itself is often log-normally distributed. This is because extreme moves become less likely as the stock's price approaches zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The errors are usually assumed to be uncorrelated across measurements, and follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U."}, {"text": "The errors are usually assumed to be uncorrelated across measurements, and follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U."}, {"text": "Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)"}, {"text": "Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. 
(Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)"}, {"text": "Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)"}, {"text": "The main difference between the two approaches is that the GLM strictly assumes that the residuals will follow a conditionally normal distribution, while the GLiM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the GLM is a special case of the GLiM in which the distribution of the residuals follow a conditionally normal distribution."}, {"text": "The main difference between the two approaches is that the GLM strictly assumes that the residuals will follow a conditionally normal distribution, while the GLiM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the GLM is a special case of the GLiM in which the distribution of the residuals follow a conditionally normal distribution."}]}, {"question": "What is the difference between nonresponse and response bias", "positive_ctxs": [{"text": "Response bias can be defined as the difference between the true values of variables in a study's net sample group and the values of variables obtained in the results of the same study. 
Nonresponse bias occurs when some respondents included in the sample do not respond."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A survey\u2019s response rate is the result of dividing the number of people who were interviewed by the total number of people in the sample who were eligible to participate and should have been interviewed. A low response rate can give rise to sampling bias if the nonresponse is unequal among the participants regarding exposure and/or outcome. Such bias is known as nonresponse bias."}, {"text": "Attrition bias is a kind of selection bias caused by attrition (loss of participants), discounting trial subjects/tests that did not run to completion. It is closely related to the survivorship bias, where only the subjects that \"survived\" a process are included in the analysis or the failure bias, where only the subjects that \"failed\" a process are included. It includes dropout, nonresponse (lower response rate), withdrawal and protocol deviators."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "For many years, a survey's response rate was viewed as an important indicator of survey quality. Many observers presumed that higher response rates assure more accurate survey results (Aday 1996; Babbie 1990; Backstrom and Hursh 1963; Rea and Parker 1997). But because measuring the relation between nonresponse and the accuracy of a survey statistic is complex and expensive, few rigorously designed studies provided empirical evidence to document the consequences of lower response rates until recently."}, {"text": "will consist of the sum of the steady-state response and a transient response. 
The steady-state response is the output of the system in the limit of infinite time, and the transient response is the difference between the response and the steady state response (It corresponds to the homogeneous solution of the above differential equation.) The transfer function for an LTI system may be written as the product:"}, {"text": "The perceptron learning rule originates from the Hebbian assumption, and was used by Frank Rosenblatt in his perceptron in 1958. The net is passed to the activation (transfer) function and the function's output is used for adjusting the weights. The learning signal is the difference between the desired response and the actual response of a neuron."}]}, {"question": "What is the difference between linear and nonlinear association", "positive_ctxs": [{"text": "Linear means something related to a line. A non-linear equation is such which does not form a straight line. It looks like a curve in a graph and has a variable slope value. The major difference between linear and nonlinear equations is given here for the students to understand it in a more natural way."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors."}, {"text": "The null hypothesis is that there is no association between the treatment and the outcome. More precisely, the null hypothesis is"}, {"text": "The null hypothesis is that there is no association between the treatment and the outcome. More precisely, the null hypothesis is"}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "How do I use text mining in R", "positive_ctxs": [{"text": "The 5 main steps to create word clouds in RStep 1: Create a text file. Step 2 : Install and load the required packages. Step 3 : Text mining. Step 4 : Build a term-document matrix. Step 5 : Generate the Word cloud."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Biomedical text mining \u2013 (also known as BioNLP), this is text mining applied to texts and literature of the biomedical and molecular biology domain. It is a rather recent research field drawing elements from natural language processing, bioinformatics, medical informatics and computational linguistics. There is an increasing interest in text mining and information extraction strategies applied to the biomedical and molecular biology literature due to the increasing number of electronically available publications stored in databases such as PubMed."}, {"text": "Data wrangling is a superset of data mining and requires processes that some data mining uses, but not always. 
The process of data mining is to find patterns within large data sets, where data wrangling transforms data in order to deliver insights about that data. Even though data wrangling is a superset of data mining does not mean that data mining does not use it, there are many use cases for data wrangling in data mining."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The Replicated Softmax Model is also a variant of restricted Boltzmann machine and commonly used to model word count vectors in a document. In a typical text mining problem, let"}, {"text": "US copyright law, and in particular its provision for fair use, upholds the legality of content mining in America, and other fair use countries such as Israel, Taiwan and South Korea. As content mining is transformative, that is it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed\u2014one being text and data mining."}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. How he got into my pajamas I don't know.\""}, {"text": "Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. 
It was introduced by Avrim Blum and Tom Mitchell in 1998."}]}, {"question": "What is the best statistical analysis technique", "positive_ctxs": [{"text": "5 Most Important Methods For Statistical Data AnalysisMean. The arithmetic mean, more commonly known as \u201cthe average,\u201d is the sum of a list of numbers divided by the number of items on the list. Standard Deviation. Regression. Sample Size Determination. Hypothesis Testing."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Log-linear analysis is a technique used in statistics to examine the relationship between more than two categorical variables. The technique is used for both hypothesis testing and model building. In both these uses, models are tested to find the most parsimonious (i.e., least complex) model that best accounts for the variance in the observed frequencies."}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "It is common to make decisions under uncertainty. What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet \"guarantee\" acceptable performance?"}, {"text": "Factor analysis is a frequently used technique in cross-cultural research. It serves the purpose of extracting cultural dimensions. 
The best known cultural dimensions models are those elaborated by Geert Hofstede, Ronald Inglehart, Christian Welzel, Shalom Schwartz and Michael Minkov."}, {"text": "Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is entitled kernel PCA."}, {"text": "Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is entitled kernel PCA."}]}, {"question": "What's the big deal about Big Data", "positive_ctxs": [{"text": "Big data is a big deal. From reducing their costs and making better decisions, to creating products and services that are in demand by customers, businesses will increasingly benefit by using big-data analytics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The 'V' model of Big Data is concerting as it centres around computational scalability and lacks in a loss around the perceptibility and understandability of information. This led to the framework of cognitive big data, which characterizes Big Data application according to:"}, {"text": "The 'V' model of Big Data is concerting as it centres around computational scalability and lacks in a loss around the perceptibility and understandability of information. This led to the framework of cognitive big data, which characterizes Big Data application according to:"}, {"text": "In 2012, the Obama administration announced the Big Data Research and Development Initiative, to explore how big data could be used to address important problems faced by the government. 
The initiative is composed of 84 different big data programs spread across six departments."}, {"text": "In 2012, the Obama administration announced the Big Data Research and Development Initiative, to explore how big data could be used to address important problems faced by the government. The initiative is composed of 84 different big data programs spread across six departments."}, {"text": "Big Data has been used in policing and surveillance by institutions like law enforcement and corporations. Due to the less visible nature of data-based surveillance as compared to traditional method of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing, big data policing can reproduce existing societal inequalities in three ways:"}, {"text": "Big Data has been used in policing and surveillance by institutions like law enforcement and corporations. Due to the less visible nature of data-based surveillance as compared to traditional method of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing, big data policing can reproduce existing societal inequalities in three ways:"}, {"text": "The U.S. state of Massachusetts announced the Massachusetts Big Data Initiative in May 2012, which provides funding from the state government and private companies to a variety of research institutions. The Massachusetts Institute of Technology hosts the Intel Science and Technology Center for Big Data in the MIT Computer Science and Artificial Intelligence Laboratory, combining government, corporate, and institutional funding and research efforts.The European Commission is funding the 2-year-long Big Data Public Private Forum through their Seventh Framework Program to engage companies, academics and other stakeholders in discussing big data issues. 
The project aims to define a strategy in terms of research and innovation to guide supporting actions from the European Commission in the successful implementation of the big data economy."}]}, {"question": "Which search method is used in Minimax algorithm", "positive_ctxs": [{"text": "recursion"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A breadth-first search (BFS) is another technique for traversing a finite graph. BFS visits the sibling vertices before visiting the child vertices, and a queue is used in the search process. This algorithm is often used to find the shortest path from one vertex to another."}, {"text": "The A* search algorithm is an example of a best-first search algorithm, as is B*. Best-first algorithms are often used for path finding in combinatorial search. Neither A* nor B* is a greedy best-first search, as they incorporate the distance from the start in addition to estimated distances to the goal."}, {"text": "Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain."}, {"text": "Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain."}, {"text": "Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain."}, {"text": "Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote minimizing the opponent's maximum payoff. 
In a zero-sum game, this is identical to minimizing one's own maximum loss, and to maximizing one's own minimum gain."}, {"text": "In computer science, beam search is a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic."}]}, {"question": "What is the difference between linear and nonlinear filters", "positive_ctxs": [{"text": "Linear filtering is the filtering method in which the value of output pixel is linear combinations of the neighbouring input pixels. A non-linear filtering is one that cannot be done with convolution or Fourier multiplication. A sliding median filter is a simple example of a non-linear filter."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "However, nonlinear filters are considerably harder to use and design than linear ones, because the most powerful mathematical tools of signal analysis (such as the impulse response and the frequency response) cannot be used on them. Thus, for example, linear filters are often used to remove noise and distortion that was created by nonlinear processes, simply because the proper non-linear filter would be too hard to design and construct."}, {"text": "From the foregoing, we can know that the nonlinear filters have quite different behavior compared to linear filters. The most important characteristic is that, for nonlinear filters, the filter output or response of the filter does not obey the principles outlined earlier, particularly scaling and shift invariance. Furthermore, a nonlinear filter can produce results that vary in a non-intuitive manner."}, {"text": "Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. 
Again, the spike response model or the adaptive integrate-and-fire model makes it possible to predict the spike train in the output for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire does not."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "What is regression example", "positive_ctxs": [{"text": "Linear regression quantifies the relationship between one or more predictor variable(s) and one outcome variable. For example, it can be used to quantify the relative impacts of age, gender, and diet (the predictor variables) on height (the outcome variable)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What does a left skewed distribution mean", "positive_ctxs": [{"text": "For skewed distributions, it is quite common to have one tail of the distribution considerably longer or drawn out relative to the other tail. A \"skewed right\" distribution is one in which the tail is on the right side. 
A \"skewed left\" distribution is one in which the tail is on the left side."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve."}, {"text": "negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve."}, {"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. 
The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "The third central moment is the measure of the lopsidedness of the distribution; any symmetric distribution will have a third central moment, if defined, of zero. The normalised third central moment is called the skewness, often \u03b3. A distribution that is skewed to the left (the tail of the distribution is longer on the left) will have a negative skewness."}, {"text": "Similarly, for \u03b2/\u03b1 \u2192 \u221e, or for \u03b1/\u03b2 \u2192 0, the mean is located at the left end, x = 0. The beta distribution becomes a 1-point Degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the left end, x = 0."}, {"text": "A distribution that is skewed to the right (the tail of the distribution is longer on the right), will have a positive skewness."}]}, {"question": "What is the correlation coefficient in a linear regression", "positive_ctxs": [{"text": "Pearson's product moment correlation coefficient (r) is given as a measure of linear association between the two variables: r\u00b2 is the proportion of the total variance (s\u00b2) of Y that can be explained by the linear regression of Y on x. 1-r\u00b2 is the proportion that is not explained by the regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. 
The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "The square of the sample correlation coefficient is typically denoted r2 and is a special case of the coefficient of determination. In this case, it estimates the fraction of the variance in Y that is explained by X in a simple linear regression. So if we have the observed dataset"}, {"text": "The square of the sample correlation coefficient is typically denoted r2 and is a special case of the coefficient of determination. In this case, it estimates the fraction of the variance in Y that is explained by X in a simple linear regression. So if we have the observed dataset"}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. 
Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}]}, {"question": "How do you measure risk and return", "positive_ctxs": [{"text": "Investment risk is the idea that an investment will not perform as expected, that its actual return will deviate from the expected return. Risk is measured by the amount of volatility, that is, the difference between actual returns and average (expected) returns."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. 
When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns."}, {"text": "The fundamental concept of risk is that as it increases, the expected return on an investment should increase as well, an increase known as the risk premium. In other words, investors should expect a higher return on an investment when that investment carries a higher level of risk or uncertainty. When evaluating investments, investors should estimate both the expected return and the uncertainty of future returns."}, {"text": "The risk difference (RD), sometimes called absolute risk reduction, is simply the difference in risk (probability) of an event between two groups. It is a useful measure in experimental research, since RD tells you the extent to which an experimental intervention changes the probability of an event or outcome. Using the example above, the probabilities for those in the control group and treatment group passing are 2/3 (or 0.67) and 6/7 (or 0.86), respectively, and so the RD effect size is 0.86 \u2212 0.67 = 0.19 (or 19%)."}, {"text": "One measure of the statistical risk of a continuous variable, such as the return on an investment, is simply the estimated variance of the variable, or equivalently the square root of the variance, called the standard deviation. Another measure in finance, one which views upside risk as unimportant compared to downside risk, is the downside beta. In the context of a binary variable, a simple statistical measure of risk is simply the probability that a variable will take on the lower of two values."}, {"text": "Unlike ARA whose units are in $\u22121, RRA is a dimension-less quantity, which allows it to be applied universally. Like for absolute risk aversion, the corresponding terms constant relative risk aversion (CRRA) and decreasing/increasing relative risk aversion (DRRA/IRRA) are used. 
This measure has the advantage that it is still a valid measure of risk aversion, even if the utility function changes from risk averse to risk loving as c varies, i.e."}]}, {"question": "What is an example of probability distribution", "positive_ctxs": [{"text": "The probability distribution of a discrete random variable can always be represented by a table. For example, suppose you flip a coin two times. The probability of getting 0 heads is 0.25; 1 head, 0.50; and 2 heads, 0.25. Thus, the table is an example of a probability distribution for a discrete random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "is an example of a hyperprior distribution. The notation of the distribution of Y changes as another parameter is added, i.e."}, {"text": "A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be?"}, {"text": "Given a known joint distribution of two discrete random variables, say, X and Y, the marginal distribution of either variable \u2013 X for example \u2014 is the probability distribution of X when the values of Y are not taken into consideration. This can be calculated by summing the joint probability distribution over all values of Y. Naturally, the converse is also true: the marginal distribution can be obtained for Y by summing over the separate values of X."}, {"text": "Given a known joint distribution of two discrete random variables, say, X and Y, the marginal distribution of either variable \u2013 X for example \u2014 is the probability distribution of X when the values of Y are not taken into consideration. 
This can be calculated by summing the joint probability distribution over all values of Y. Naturally, the converse is also true: the marginal distribution can be obtained for Y by summing over the separate values of X."}, {"text": "Given a known joint distribution of two discrete random variables, say, X and Y, the marginal distribution of either variable \u2013 X for example \u2014 is the probability distribution of X when the values of Y are not taken into consideration. This can be calculated by summing the joint probability distribution over all values of Y. Naturally, the converse is also true: the marginal distribution can be obtained for Y by summing over the separate values of X."}, {"text": "Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella?"}]}, {"question": "How do you perform a binary search", "positive_ctxs": [{"text": "Binary Search: Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. 
If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What s so special about rectified linear units ReLU activation function", "positive_ctxs": [{"text": "The rectified linear activation function or ReLU for short is a piecewise linear function that will output the input directly if it is positive, otherwise, it will output zero. 
The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and perform better."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "This is the reason why backpropagation requires the activation function to be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g."}, {"text": "This is the reason why backpropagation requires the activation function to be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g."}, {"text": "This is the reason why backpropagation requires the activation function to be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g."}, {"text": "This is the reason why backpropagation requires the activation function to be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g."}, {"text": "This is the reason why backpropagation requires the activation function to be differentiable. 
(Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g."}]}, {"question": "What is the purpose of a regression model", "positive_ctxs": [{"text": "Typically, a regression analysis is done for one of two purposes: In order to predict the value of the dependent variable for individuals for whom some information concerning the explanatory variables is available, or in order to estimate the effect of some explanatory variable on the dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau, coming from probability + unit. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type of binary classification model."}, {"text": "In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau, coming from probability + unit. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type of binary classification model."}, {"text": "In statistics, the ordered logit model (also ordered logistic regression or proportional odds model) is an ordinal regression model\u2014that is, a regression model for ordinal dependent variables\u2014first considered by Peter McCullagh. 
For example, if one question on a survey is to be answered by a choice among \"poor\", \"fair\", \"good\", and \"excellent\", and the purpose of the analysis is to see how well that response can be predicted by the responses to other questions, some of which may be quantitative, then ordered logistic regression may be used. It can be thought of as an extension of the logistic regression model that applies to dichotomous dependent variables, allowing for more than two (ordered) response categories."}, {"text": "In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables."}, {"text": "To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is the least squares method\u2014which evaluates the appropriateness of a linear regression model for a bivariate dataset, but whose limitation is related to the known distribution of the data."}, {"text": "To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is the least squares method\u2014which evaluates the appropriateness of a linear regression model for a bivariate dataset, but whose limitation is related to the known distribution of the data."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "Is at test a statistical test", "positive_ctxs": [{"text": "T-test. 
A t-test is used to compare the mean of two given samples. Like a z-test, a t-test also assumes a normal distribution of the sample. A t-test is used when the population parameters (mean and standard deviation) are not known."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not, at the same time, to investigate other hypotheses, then such a test is called a significance test. Note that the hypothesis might specify the probability distribution of"}, {"text": "The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time."}, {"text": "In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic. A two-tailed test is appropriate if the estimated value is greater or less than a certain range of values, for example, whether a test taker may score above or below a specific range of scores. 
This method is used for null hypothesis testing and if the estimated value exists in the critical areas, the alternative hypothesis is accepted over the null hypothesis."}, {"text": "In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic. A two-tailed test is appropriate if the estimated value is greater or less than a certain range of values, for example, whether a test taker may score above or below a specific range of scores. This method is used for null hypothesis testing and if the estimated value exists in the critical areas, the alternative hypothesis is accepted over the null hypothesis."}, {"text": "Every statistical hypothesis test can be formulated as a comparison of statistical models. Hence, every statistical hypothesis test can be replicated via AIC. Two examples are briefly described in the subsections below."}]}, {"question": "What is a gradient norm", "positive_ctxs": [{"text": "Replaces an image by the norm of its gradient, as estimated by discrete filters. The Raw filter of the detail panel designates two filters that correspond to the two components of the gradient in the principal directions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel\u2013Feldman operator is either the corresponding gradient vector or the norm of this vector. 
The Sobel\u2013Feldman operator is based on convolving the image with a small, separable, and integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computations."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "This vector is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function \u2207f which takes the point a to the vector \u2207f(a). Consequently, the gradient produces a vector field."}, {"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "The Frobenius norm is submultiplicative and is very useful for numerical linear algebra. The submultiplicativity of Frobenius norm can be proved using Cauchy\u2013Schwarz inequality."}, {"text": "norm is a circle (in general an n-sphere), which is rotationally invariant and, therefore, has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to encounter a corner (or a higher-dimensional equivalent) of a hypercube, for which some components of"}, {"text": "In density estimation, the unknown parameter is probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. 
For example, for L2 norm,"}]}, {"question": "What is step size in machine learning", "positive_ctxs": [{"text": "The amount that the weights are updated during training is referred to as the step size or the \u201clearning rate.\u201d Specifically, the learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". In the adaptive control literature, the learning rate is commonly referred to as gain.In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}, {"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". In the adaptive control literature, the learning rate is commonly referred to as gain.In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}, {"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". 
In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}, {"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}, {"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". 
In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}, {"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model \"learns\". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting."}]}, {"question": "What is the role of the activation function in a neural network How does this function in a human neural network system", "positive_ctxs": [{"text": "Simply put, an activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. When comparing with a neuron-based model that is in our brains, the activation function is at the end deciding what is to be fired to the next neuron."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. 
It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control."}, {"text": "The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like"}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. 
A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}]}, {"question": "What is the need of dimensionality reduction in data mining", "positive_ctxs": [{"text": "Dimensionality reduction is the process of reducing the number of random variables or attributes under consideration. High-dimensionality data reduction, as part of a data pre-processing-step, is extremely important in many real-world applications."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors)."}, {"text": "Feature projection (also called Feature extraction) transforms the data from the high-dimensional space to a space of fewer dimensions. The data transformation may be linear, as in principal component analysis (PCA), but many nonlinear dimensionality reduction techniques also exist. For multidimensional data, tensor representation can be used in dimensionality reduction through multilinear subspace learning."}, {"text": "Feature projection (also called Feature extraction) transforms the data from the high-dimensional space to a space of fewer dimensions. 
The data transformation may be linear, as in principal component analysis (PCA), but many nonlinear dimensionality reduction techniques also exist. For multidimensional data, tensor representation can be used in dimensionality reduction through multilinear subspace learning."}, {"text": "The intrinsic dimensionality is two, because two variables (rotation and scale) were varied in order to produce the data. Information about the shape or look of a letter 'A' is not part of the intrinsic variables because it is the same in every instance. Nonlinear dimensionality reduction will discard the correlated information (the letter 'A') and recover only the varying information (rotation and scale)."}, {"text": "l-diversity, also written as \u2113-diversity, is a form of group based anonymization that is used to preserve privacy in data sets by reducing the granularity of a data representation. This reduction is a trade off that results in some loss of effectiveness of data management or mining algorithms in order to gain some privacy. The l-diversity model is an extension of the k-anonymity model which reduces the granularity of data representation using techniques including generalization and suppression such that any given record maps onto at least k-1 other records in the data."}, {"text": "The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. 
The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse."}]}, {"question": "What will happen to AUC if I switch the positive and negative classes in the test data", "positive_ctxs": [{"text": "Your classifier would have learned an equal and opposite rule, with the same performance and same AUC / ROC curve."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Whether and how to use a neutral class depends on the nature of the data: if the data is clearly clustered into neutral, negative and positive language, it makes sense to filter the neutral language out and focus on the polarity between positive and negative sentiments. If, in contrast, the data are mostly neutral with small deviations towards positive and negative affect, this strategy would make it harder to clearly distinguish between the two poles."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) 
They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error. In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error. In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) 
They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error. In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error. In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. 
Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets."}, {"text": "When an individual being tested has a different pre-test probability of having a condition than the control groups used to establish the PPV and NPV, the PPV and NPV are generally distinguished from the positive and negative post-test probabilities, with the PPV and NPV referring to the ones established by the control groups, and the post-test probabilities referring to the ones for the tested individual (as estimated, for example, by likelihood ratios). Preferably, in such cases, a large group of equivalent individuals should be studied, in order to establish separate positive and negative predictive values for use of the test in such individuals."}]}, {"question": "What is posterior probability example", "positive_ctxs": [{"text": "Posterior probability = prior probability + new evidence (called likelihood). For example, historical data suggests that around 60% of students who start college will graduate within 6 years. This is the prior probability. However, you think that figure is actually much lower, so set out to collect new data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "As a result, this formula can be expressed as simply \"the posterior predictive probability of seeing a category is proportional to the total observed count of that category\", or as \"the expected count of a category is the same as the total observed count of the category\", where \"observed count\" is taken to include the pseudo-observations of the prior. The reason for the equivalence between posterior predictive probability and the expected value of the posterior distribution of p is evident with re-examination of the above formula. As explained in the posterior predictive distribution article, the formula for the posterior predictive probability has the form of an expected value taken with respect to the posterior distribution:"}, {"text": "(This intuition is ignoring the effect of the prior distribution. Furthermore, the posterior is a distribution over distributions. The posterior distribution in general describes the parameter in question, and in this case the parameter itself is a discrete probability distribution, i.e."}, {"text": "In Bayesian probability theory, if the posterior distributions p(\u03b8 | x) are in the same probability distribution family as the prior probability distribution p(\u03b8), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | \u03b8). For example, the Gaussian family is conjugate to itself (or self-conjugate) with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. This means that the Gaussian distribution is a conjugate prior for the likelihood that is also Gaussian."}, {"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. 
What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is more, there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge. A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}]}, {"question": "Where can one find a training set for sentiment analysis", "positive_ctxs": [{"text": "It depends on the data you want and the project you're doing. You could use even your twitter data for sentiment analysis. Request your archive in twitter -> download -> analyse sentiment through supervised learning techniques."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. 
Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text. There are various other types of sentiment analysis, such as Aspect-Based sentiment analysis, Grading sentiment analysis (positive, negative, neutral), Multilingual sentiment analysis, and detection of emotions."}, {"text": "In general, the utility for practical commercial tasks of sentiment analysis as it is defined in academic research has been called into question, mostly since the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worrying about the effect of public discourse on e.g. brand or corporate reputation. To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals. The RepLab evaluation data set focuses less on the content of the text under consideration and more on the effect of the text in question on brand reputation. Because evaluation of sentiment analysis is becoming more and more task based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set."}, {"text": "Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combination ranking score of similarity and sentiment rating can be constructed for each candidate item. Except for the difficulty of the sentiment analysis itself, applying sentiment analysis on reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review."}, {"text": "Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. 
Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape. Furthermore, sentiment analysis on Twitter has also been shown to capture the public mood behind human reproduction cycles on a planetary scale, as well as other problems of public-health relevance such as adverse drug reactions."}, {"text": "This set of samples is called the training set. The classification problem is then to find a good predictor for the class"}, {"text": "This set of samples is called the training set. The classification problem is then to find a good predictor for the class"}, {"text": "This set of samples is called the training set. The classification problem is then to find a good predictor for the class"}]}, {"question": "How does random assignment control for confounding variables", "positive_ctxs": [{"text": "Random assignment helps reduce the chances of systematic differences between the groups at the start of an experiment and, thereby, mitigates the threats of confounding variables and alternative explanations. However, the process does not always equalize all of the confounding variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the variables have been identified and defined, a procedure should then be implemented and group differences should be examined.In an experiment with random assignment, study units have the same chance of being assigned to a given treatment condition. As such, random assignment ensures that both the experimental and control groups are equivalent. In a quasi-experimental design, assignment to a given treatment condition is based on something other than random assignment."}, {"text": "A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on target population without random assignment. 
Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to treatment or control. Instead, quasi-experimental designs typically allow the researcher to control the assignment to the treatment condition, but using some criterion other than random assignment (e.g., an eligibility cutoff mark). Quasi-experiments are subject to concerns regarding internal validity, because the treatment and control groups may not be comparable at baseline."}, {"text": "This deficiency in randomization makes it harder to rule out confounding variables and introduces new threats to internal validity. Because randomization is absent, some knowledge about the data can be approximated, but conclusions of causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables. Disadvantages also include that the study groups may provide weaker evidence because of the lack of randomness."}, {"text": "Depending on the type of quasi-experimental design, the researcher might have control over assignment to the treatment condition but use some criteria other than random assignment (e.g., a cutoff score) to determine which participants receive the treatment, or the researcher may have no control over the treatment condition assignment and the criteria used for assignment may be unknown. 
Factors such as cost, feasibility, political concerns, or convenience may influence how or if participants are assigned to a given treatment condition, and as such, quasi-experiments are subject to concerns regarding internal validity (i.e., can the results of the experiment be used to make a causal inference?)"}, {"text": "It is named after William G. Cochran, Nathan Mantel and William Haenszel. Extensions of this test to a categorical response and/or to several groups are commonly called Cochran\u2013Mantel\u2013Haenszel statistics. It is often used in observational studies where random assignment of subjects to different treatments cannot be controlled, but confounding covariates can be measured."}, {"text": "It is named after William G. Cochran, Nathan Mantel and William Haenszel. Extensions of this test to a categorical response and/or to several groups are commonly called Cochran\u2013Mantel\u2013Haenszel statistics. It is often used in observational studies where random assignment of subjects to different treatments cannot be controlled, but confounding covariates can be measured."}, {"text": "In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect."}]}, {"question": "What is map in ML", "positive_ctxs": [{"text": "In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity, that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. 
Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}, {"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}, {"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. 
Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}, {"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}, {"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}, {"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. 
Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}]}, {"question": "How do you find the sample space", "positive_ctxs": [{"text": "The size of the sample space is the total number of possible outcomes. For example, when you roll 1 die, the sample space is 1, 2, 3, 4, 5, or 6. So the size of the sample space is 6."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Typically, when the sample space is finite, any subset of the sample space is an event (i.e. all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite."}, {"text": "Typically, when the sample space is finite, any subset of the sample space is an event (i.e. all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite."}]}, {"question": "How can you improve the accuracy of a logistic regression model in python", "positive_ctxs": [{"text": "Some of my suggestions to you would be: Feature Scaling and/or Normalization - Check the scales of your gre and gpa features. Class Imbalance - Look for class imbalance in your data. Optimize other scores - You can optimize on other metrics also such as Log Loss and F1-Score."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class.)"}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}, {"text": "There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. 
This can make it difficult to compare different treatments of the subject in different texts. The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model."}]}, {"question": "How do you get global minima in K means algorithm", "positive_ctxs": [{"text": "The k-means problem is finding the least-squares assignment to centroids. There are multiple algorithms for finding a solution. There is an obvious approach to find the global optimum: enumerating all k^n possible assignments - that will yield a global minimum, but in exponential runtime."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain."}, {"text": "This expression means that y is equal to the power that you would raise b to, to get x. This operation undoes exponentiation because the logarithm of x tells you the exponent that the base has been raised to."}, {"text": "Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. (Sixth chapter: \"Math error number 6: Simpson's paradox."}, {"text": "Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. 
(First chapter: \"Math error number 1: multiplying non-independent probabilities."}, {"text": "Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, mathematical programming algorithms based on branch-and-bound and column generation have produced \u2018\u2019provenly optimal\u2019\u2019 solutions for datasets with up to 2,300 entities."}]}, {"question": "What happens when the p value is lower than the level of significance", "positive_ctxs": [{"text": "If a p-value is lower than our significance level, we reject the null hypothesis. If not, we fail to reject the null hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error."}, {"text": "The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (\u03b1). If F \u2265 FCritical, the null hypothesis is rejected."}]}, {"question": "How do you construct a less than cumulative frequency distribution", "positive_ctxs": [{"text": "1:314:30Suggested clip \u00b7 120 secondsCumulative Frequency Distribution (Less than and More than YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests."}, {"text": "A graph of the cumulative probability of failures up to each time point is called the cumulative distribution function, or CDF. In survival analysis, the cumulative distribution function gives the probability that the survival time is less than or equal to a specific time, t."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "In the case of cumulative frequency there are only two possibilities: a certain reference value X is exceeded or it is not exceeded. The sum of frequency of exceedance and cumulative frequency is 1 or 100%. Therefore, the binomial distribution can be used in estimating the range of the random error."}, {"text": "When two genes are close together on the same chromosome, they do not assort independently and are said to be linked. Whereas genes located on different chromosomes assort independently and have a recombination frequency of 50%, linked genes have a recombination frequency that is less than 50%."}, {"text": "Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The phenomenon may be time- or space-dependent. Cumulative frequency is also called frequency of non-exceedance."}, {"text": "If successful, the known equation is enough to report the frequency distribution and a table of data will not be required. Further, the equation helps interpolation and extrapolation. However, care should be taken with extrapolating a cumulative frequency distribution, because this may be a source of errors."}]}, {"question": "Is AI all about simulating human intelligence", "positive_ctxs": [{"text": "Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? 
Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "David Gelernter writes, \"No computer will be creative unless it can simulate all the nuances of human emotion.\" This concern about emotion has posed problems for AI researchers and it connects to the concept of strong AI as its research progresses into the future."}, {"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}, {"text": "Seed AI is a significant part of some theories about the technological singularity: proponents believe that the development of seed AI will rapidly yield ever-smarter intelligence (via bootstrapping) and thus a new era."}, {"text": "By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. 
A number of researchers began to look into \"sub-symbolic\" approaches to specific AI problems. Sub-symbolic methods manage to approach intelligence without specific representations of knowledge."}]}, {"question": "What is vanishing gradient problem in RNN", "positive_ctxs": [{"text": "For the vanishing gradient problem, the further you go through the network, the lower your gradient is and the harder it is to train the weights, which has a domino effect on all of the further weights throughout the network. That was the main roadblock to using Recurrent Neural Networks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In turn this helps the automatizer to make many of its once unpredictable inputs predictable, such that the chunker can focus on the remaining unpredictable events. A generative model partially overcame the vanishing gradient problem of automatic differentiation or backpropagation in neural networks in 1992. In 1993, such a system solved a \u201cVery Deep Learning\u201d task that required more than 1000 subsequent layers in an RNN unfolded in time."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}]}, {"question": "What are the different types of parametric tests", "positive_ctxs": [{"text": "Hypothesis Tests of the Mean and MedianParametric tests (means)Nonparametric tests (medians)1-sample t test1-sample Sign, 1-sample Wilcoxon2-sample t testMann-Whitney testOne-Way ANOVAKruskal-Wallis, Mood's median testFactorial DOE with one factor and one blocking variableFriedman test"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Parametric tests, such as those described in exact statistics, are exact tests when the parametric assumptions are fully met, but in practice the use of the term exact (significance) test is reserved for those tests that do not rest on parametric assumptions \u2013 non-parametric tests. However, in practice most implementations of non-parametric test software use asymptotical algorithms for obtaining the significance value, which makes the implementation of the test non-exact."}, {"text": "The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box\u2013Anderson test and the Moses test."}]}, {"question": "Does linear regression show correlation", "positive_ctxs": [{"text": "Simple linear regression relates X to Y through an equation of the form Y = a + bX. Both quantify the direction and strength of the relationship between two numeric variables. 
The correlation squared (r2 or R2) has special meaning in simple linear regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both?"}, {"text": "It can be computationally expensive to solve the linear regression problems. Actually, the nth-order partial correlation (i.e., with |Z| = n) can be easily computed from three (n - 1)th-order partial correlations. The zeroth-order partial correlation \u03c1XY\u00b7\u00d8 is defined to be the regular correlation coefficient \u03c1XY."}, {"text": "When evaluating the goodness-of-fit of simulated (Ypred) vs. measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs= m\u00b7Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1\u00b7Ypred + 0 (i.e., the 1:1 line)."}, {"text": "In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating only xj to y; this effect is the total derivative of y with respect to xj."}]}, {"question": "What does likelihood mean in statistics", "positive_ctxs": [{"text": "In statistics, the likelihood function (often simply called the likelihood) measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "If \u03b8 is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval. Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism)."}]}, {"question": "How do you find the percentile under the normal curve", "positive_ctxs": [{"text": "If you're given the probability (percent) greater than x and you need to find x, you translate this as: Find b where p(X > b) = p (and p is given). Rewrite this as a percentile (less-than) problem: Find b where p(X < b) = 1 \u2013 p. This means find the (1 \u2013 p)th percentile for X."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Many scores are derived from the normal distribution, including percentile ranks (\"percentiles\" or \"quantiles\"), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, some behavioral statistical procedures assume that scores are normally distributed; for example, t-tests and ANOVAs. Bell curve grading assigns relative grades based on a normal distribution of scores."}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "What is an independent variable example", "positive_ctxs": [{"text": "Two examples of common independent variables are age and time. They're independent of everything else. 
The dependent variable (sometimes known as the responding variable) is what is being studied and measured in the experiment. It's what changes as a result of the changes to the independent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In an experiment, the variable manipulated by an experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}]}, {"question": "What is decision tree diagram", "positive_ctxs": [{"text": "A decision tree is a flowchart-like diagram that shows the various outcomes from a series of decisions. It can be used as a decision-making tool, for research analysis, or for planning strategy. A primary advantage for using a decision tree is that it is easy to follow and understand."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature.
The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features. A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}, {"text": "An alternating decision tree (ADTree) is a machine learning method for classification.
It generalizes decision trees and has connections to boosting."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature."}]}, {"question": "What is Homoscedasticity in regression analysis", "positive_ctxs": [{"text": "Heteroscedasticity means unequal scatter. In regression analysis, we talk about heteroscedasticity in the context of the residuals or error term. Specifically, heteroscedasticity is a systematic change in the spread of the residuals over the range of measured values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. 
Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "Is the universe an isolated system", "positive_ctxs": [{"text": "The universe is considered an isolated system because the energy of the universe is constant. This matches with the definition of an isolated system, which is that energy is not exchanged with the surroundings, thus staying constant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing.
It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source."}, {"text": "While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that heat may not flow to and from an isolated system, but heat flow to and from a closed system is possible."}, {"text": "Scientific null assumptions are used to directly advance a theory. For example, the angular momentum of the universe is zero. If not true, the theory of the early universe may need revision."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the \"universe\" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum."}, {"text": "A contact equilibrium can exist for each chemical constituent of the system of interest.
In a contact equilibrium, despite the possible exchange through the selectively permeable wall, the system of interest is changeless, as if it were in isolated thermodynamic equilibrium. This scheme follows the general rule that \"... we can consider an equilibrium only with respect to specified processes and defined experimental conditions.\""}]}, {"question": "How do you calculate linear regression by hand", "positive_ctxs": [{"text": "Simple Linear Regression Math by Hand: Calculate average of your X variable. Calculate the difference between each X and the average X. Square the differences and add it all up. Calculate average of your Y variable. Multiply the differences (of X and Y from their respective averages) and add them all together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line. Linear predictor function: The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line. Linear predictor function: The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e.
a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}]}, {"question": "What is the purpose of Bayesian analysis", "positive_ctxs": [{"text": "Bayesian analysis is a statistical paradigm that answers research questions about unknown parameters using probability statements."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic?
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data."}, {"text": "In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters."}, {"text": "The term Bayesian derives from the 18th century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Mathematician Pierre-Simon Laplace pioneered and popularised what is now called Bayesian probability."}, {"text": "It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided."}]}, {"question": "What is validity and reliability in statistics", "positive_ctxs": [{"text": "Reliability refers to the extent that the instrument yields the same results over multiple trials. 
Validity refers to the extent that the instrument measures what it was designed to measure. Construct validity uses statistical analyses, such as correlations, to verify the relevance of the questions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling."}, {"text": "In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior."}, {"text": "In psychometrics and psychophysics, the term accuracy is interchangeably used with validity and constant error. Precision is a synonym for reliability and variable error.
The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior."}, {"text": "Perri and Lichtenwald (2010) provide a starting point for a discussion about a wide range of reliability and validity topics in their analysis of a wrongful murder conviction."}, {"text": "Survival analysis is a branch of statistics for analyzing the expected duration of time until one or more events happen, such as death in biological organisms and failure in mechanical systems. This topic is called reliability theory or reliability analysis in engineering, duration analysis or duration modelling in economics, and event history analysis in sociology. Survival analysis attempts to answer certain questions, such as what is the proportion of a population which will survive past a certain time?"}]}, {"question": "How do you make a statistical model", "positive_ctxs": [{"text": "Suggested clip: Building Statistical Models - YouTube (104 seconds)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?"}, {"text": "Example: On a 1-5 scale where 1 means disagree completely and 5 means agree completely, how much do you agree with the following statement. \"The Federal government should do more to help people facing foreclosure on their homes.
\"A multinomial discrete-choice model can examine the responses to these questions (model G, model H, model I)."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Can naive Bayes be used for regression", "positive_ctxs": [{"text": "Naive Bayes classifier (Russell & Norvig, 1995) is another feature-based supervised learning algorithm. It was originally intended to be used for classification tasks, but with some modifications it can be used for regression as well (Frank, Trigg, Holmes, & Witten, 2000)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting.
In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}, {"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations.
In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}]}, {"question": "How do you find the standardized score", "positive_ctxs": [{"text": "As the formula shows, the standard score is simply the score, minus the mean score, divided by the standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list.
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0).
By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "How can I learn multitasking", "positive_ctxs": [{"text": "12 Tips to boost your multitasking skills: Accept your limits. To better manage task organization, be aware of your limits, especially those you can't control. Distinguish urgent from important. Learn to concentrate. Avoid distractions. Work in blocks of time. Work on related tasks together. Learn to supervise. Plan ahead."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. How he got into my pajamas I don't know.\""}, {"text": "There are many other areas of application for sequence learning. How humans learn sequential procedures has been a long-standing research problem in cognitive science and currently is a major topic in neuroscience.
Research work has been going on in several disciplines, including artificial intelligence, neural networks, and engineering."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Looking out my window this lovely spring morning, I see an azalea in full bloom. I don't see that; though that is the only way I can describe what I see. That is a proposition, a sentence, a fact; but what I perceive is not proposition, sentence, fact, but only an image, which I make intelligible in part by means of a statement of fact."}, {"text": "How can one quantify progress? Some of the adopted ways is the reward and punishment. But what kind of reward and what kind of punishment?"}, {"text": "We stated the curse of dimensionality for integration. But exponential dependence on d occurs for almost every continuous problem that has been investigated. How can we try to vanquish the curse?"}]}, {"question": "How do you do a multivariate test", "positive_ctxs": [{"text": "How to conduct a multivariate test: Identify a problem. Formulate a hypothesis. Create variations. Determine your sample size. Test your tools. Start driving traffic. Analyze your results. Learn from your results."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "They chose the interview questions from a given list.
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "The errors are usually assumed to be uncorrelated across measurements, and follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U."}]}, {"question": "What is the difference between subquery and correlated query", "positive_ctxs": [{"text": "A subquery is a select statement that is embedded in a clause of another select statement. A Correlated subquery is a subquery that is evaluated once for each row processed by the outer query or main query."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ", and then finds a vertex with the minimal distance value.
If the distance value between the query and the selected vertex is smaller than the one between the query and the current element, then the algorithm moves to the selected vertex, and it becomes the new enter-point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex that is closer to the query than the vertex itself."}, {"text": "The performance of this algorithm is nearer to logarithmic time than linear time when the query point is near the cloud, because as the distance between the query point and the closest point-cloud point nears zero, the algorithm needs only perform a look-up using the query point as a key to get the correct result."}, {"text": "One of the main properties of the Elastic Net is that it can select groups of correlated variables. The difference between weight vectors of samples"}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias.
Establishing and correcting for bias is necessary for calibration."}]}, {"question": "How do you know if your data is normally distributed", "positive_ctxs": [{"text": "Look at normality plots of the data. \u201cNormal Q-Q Plot\u201d provides a graphical way to determine the level of normality. The black line indicates the values your sample should adhere to if the distribution was normal. If the dots fall exactly on the black line, then your data are normal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Definitions 3\u20136 deal with the unknown eligibility of potential respondents who could not be contacted. For example, there is no answer at the doors of 10 houses you attempted to survey.
Maybe 5 of those you already know house people who qualify for your survey based on neighbors telling you who lived there, but the other 5 are completely unknown."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}]}, {"question": "Why is the pooling layer used in CNN", "positive_ctxs": [{"text": "A pooling layer is another building block of a CNN. Its function is to progressively reduce the spatial size of the representation to reduce the number of parameters and computation in the network. The pooling layer operates on each feature map independently. The most common approach used in pooling is max pooling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially.
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially.
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}]}, {"question": "How does logistic regression deal with Multicollinearity", "positive_ctxs": [{"text": "How to Deal with Multicollinearity: Remove some of the highly correlated independent variables. Linearly combine the independent variables, such as adding them together. Perform an analysis designed for highly correlated variables, such as principal components analysis or partial least squares regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In a binary logistic regression model, the dependent variable has two levels (categorical).
Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). 
The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. 
Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors."}]}, {"question": "What is the derivative of E X", "positive_ctxs": [{"text": "Derivative rules for common functions (function \u2192 derivative): Square: x^2 \u2192 2x. Square root: \u221ax \u2192 (\u00bd)x^(-\u00bd). Exponential: e^x \u2192 e^x; a^x \u2192 ln(a) a^x."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The random variable E(|Z| | X) is the best predictor of |Z| given X. That is, it minimizes the mean square error E ( |Z| - f(X) )^2 on the class of all random variables of the form f(X). Similarly to the discrete case, E ( |Z| | g(X) ) = E ( |Z| | X ) for every measurable function g that is one-to-one on (-1,1)."}, {"text": "The conditional expectation E ( Y | X = 0.5 ) is of little interest; it vanishes just by symmetry. It is more interesting to calculate E ( |Z| | X = 0.5 ) treating |Z| as a function of X, Y:"}, {"text": "Let a random variable X have a probability density f(x;\u03b1). The partial derivative with respect to the (unknown, and to be estimated) parameter \u03b1 of the log likelihood function is called the score. The second moment of the score is called the Fisher information:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": ", exists and allows for the application of differential calculus.
The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events."}, {"text": ", exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events."}, {"text": "The union of M and N is the matroid whose underlying set is the union (not the disjoint union) of E and F, and whose independent sets are those subsets that are the union of an independent set in M and one in N. Usually the term \"union\" is applied when E = F, but that assumption is not essential. If E and F are disjoint, the union is the direct sum."}]}, {"question": "What is the difference between P value and confidence interval", "positive_ctxs": [{"text": "In exploratory studies, p-values enable the recognition of any statistically noteworthy findings. Confidence intervals provide information about a range in which the true value lies with a certain degree of probability, as well as about the direction and strength of the demonstrated effect."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. 
This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Given a model, likelihood intervals can be compared to confidence intervals. 
If \u03b8 is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \u03b8 will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e\u22122 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1)."}, {"text": "Given a model, likelihood intervals can be compared to confidence intervals. If \u03b8 is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \u03b8 will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e\u22122 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1)."}]}, {"question": "What do main effects mean in Anova", "positive_ctxs": [{"text": "In the design of experiments and analysis of variance, a main effect is the effect of an independent variable on a dependent variable averaged across the levels of any other independent variables. 
Main effects are essentially the overall effect of a factor."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When measurement variables are employed in interactions, it is often desirable to work with centered versions, where the variable's mean (or some other reasonably central value) is set as zero. Centering makes the main effects in interaction models more interpretable. The coefficient a in the equation above, for example, represents the effect of x1 when x2 equals zero."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Also, the original plan for the main data analyses can and should be specified in more detail or rewritten. In order to do this, several decisions about the main data analyses can and should be made:"}, {"text": "Unweighted effects coding is most appropriate in situations where differences in sample size are the result of incidental factors. The interpretation of b is different for each: in unweighted effects coding b is the difference between the mean of the experimental group and the grand mean, whereas in the weighted situation it is the mean of the experimental group minus the weighted grand mean.In effects coding, we code the group of interest with a 1, just as we would for dummy coding. The principal difference is that we code \u22121 for the group we are least interested in."}, {"text": "Unweighted effects coding is most appropriate in situations where differences in sample size are the result of incidental factors. The interpretation of b is different for each: in unweighted effects coding b is the difference between the mean of the experimental group and the grand mean, whereas in the weighted situation it is the mean of the experimental group minus the weighted grand mean.In effects coding, we code the group of interest with a 1, just as we would for dummy coding. 
The principal difference is that we code \u22121 for the group we are least interested in."}, {"text": "Unweighted effects coding is most appropriate in situations where differences in sample size are the result of incidental factors. The interpretation of b is different for each: in unweighted effects coding b is the difference between the mean of the experimental group and the grand mean, whereas in the weighted situation it is the mean of the experimental group minus the weighted grand mean.In effects coding, we code the group of interest with a 1, just as we would for dummy coding. The principal difference is that we code \u22121 for the group we are least interested in."}, {"text": "A mean does not just \"smooth\" the data. A mean is a form of low-pass filter. The effects of the particular filter used should be understood in order to make an appropriate choice."}]}, {"question": "What is the difference between probability density function and probability distribution function", "positive_ctxs": [{"text": "A function that represents a discrete probability distribution is called a probability mass function. A function that represents a continuous probability distribution is called a probability density function. Functions that represent probability distributions still have to obey the rules of probability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. 
This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "Continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X < x) for some x)."}, {"text": "Continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X < x) for some x)."}, {"text": "Continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. 
An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X < x) for some x)."}, {"text": "Continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., P(X < x) for some x)."}]}, {"question": "What is the difference between a priori and a posteriori probability", "positive_ctxs": [{"text": "Similar to the distinction in philosophy between a priori and a posteriori, in Bayesian inference a priori denotes general knowledge about the data distribution before making an inference, while a posteriori denotes knowledge that incorporates the results of making an inference."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the update phase, the current a priori prediction is combined with current observation information to refine the state estimate. This improved estimate is termed the a posteriori state estimate."}, {"text": "An a priori probability is a probability that is derived purely by deductive reasoning. One way of deriving a priori probabilities is the principle of indifference, which has the character of saying that, if there are N mutually exclusive and collectively exhaustive events and if they are equally likely, then the probability of a given event occurring is 1/N. 
Similarly, the probability of one of a given collection of K events is K / N."}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter."}, {"text": "When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed at selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule or the MAP probability rule."}, {"text": "In the case of making a decision between two hypotheses, H1, absent, and H2, present, in the event of a particular observation, y, a classical approach is to choose H1 when p(H1|y) > p(H2|y) and H2 in the reverse case. In the event that the two a posteriori probabilities are equal, one might choose to default to a single choice (either always choose H1 or always choose H2), or might randomly select either H1 or H2. The a priori probabilities of H1 and H2 can guide this choice, e.g."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements?
It has to do with uncertainty at runtime of a plan."}]}, {"question": "What is unsupervised learning in neural network", "positive_ctxs": [{"text": "This learning process is independent. During the training of ANN under unsupervised learning, the input vectors of similar type are combined to form clusters. When a new input pattern is applied, then the neural network gives an output response indicating the class to which input pattern belongs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al."}, {"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. 
Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}]}, {"question": "What is discriminator loss", "positive_ctxs": [{"text": "Critic Loss: D(x) - D(G(z)). The discriminator tries to maximize this function. In other words, it tries to maximize the difference between its output on real instances and its output on fake instances."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory? (#5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The logistic loss is sometimes called cross-entropy loss.
It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression."}]}, {"question": "What do you mean by tensor", "positive_ctxs": [{"text": "Tensors are simply mathematical objects that can be used to describe physical properties, just like scalars and vectors. In fact tensors are merely a generalisation of scalars and vectors; a scalar is a zero rank tensor, and a vector is a first rank tensor."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. 
What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "It is also possible that no mean exists. Consider a color wheel\u2014there is no mean to the set of all colors. In these situations, you must decide which mean is most useful."}, {"text": "It is also possible that no mean exists. Consider a color wheel\u2014there is no mean to the set of all colors. In these situations, you must decide which mean is most useful."}]}, {"question": "What is localization in image processing", "positive_ctxs": [{"text": "The task of object localization is to predict the object in an image as well as its boundaries. Simply, object localization aims to locate the main (or most visible) object in an image while object detection tries to find out all the objects and their boundaries."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. 
In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "First face detection is used to segment the face from the image background. In the second step the segmented face image is aligned to account for face pose, image size and photographic properties, such as illumination and grayscale. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, the facial feature extraction."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images."}, {"text": "The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). 
Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing."}, {"text": "In such situations, the particle filter can give better performance than parametric filters.Another non-parametric approach to Markov localization is the grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, the Monte Carlo localization is more accurate because the state represented in samples is not discretized."}, {"text": "The goals vary from noise removal to feature abstraction. Filtering image data is a standard process used in almost all image processing systems. Nonlinear filters are the most utilized forms of filter construction."}]}, {"question": "How does NLTK sentence Tokenizer work", "positive_ctxs": [{"text": "Tokenization is the process of tokenizing or splitting a string, text into a list of tokens. One can think of token as parts like a word is a token in a sentence, and a sentence is a token in a paragraph. How sent_tokenize works ? The sent_tokenize function uses an instance of PunktSentenceTokenizer from the nltk."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "These methods work based on the idea that sentences \"recommend\" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it will likely be a sentence of great importance. The importance of this sentence also stems from the importance of the sentences \"recommending\" it."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "How do you tell if a regression model is a good fit", "positive_ctxs": [{"text": "The best fit line is the one that minimises sum of squared differences between actual and estimated results. Taking average of minimum sum of squared difference is known as Mean Squared Error (MSE). Smaller the value, better the regression model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. 
This problem is known as the horizon effect. A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information.Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by a cross-validation set."}, {"text": "When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations. This method uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method."}, {"text": "The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit. Kline (2011) suggested .1 or smaller as a guideline for good fit."}]}, {"question": "How does cross entropy work", "positive_ctxs": [{"text": "The cross-entropy compares the model's prediction with the label which is the true probability distribution. The cross-entropy goes down as the prediction gets more and more accurate. It becomes zero if the prediction is perfect. As such, the cross-entropy can be a loss function to train a classification model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The cross entropy loss is closely related to the Kullback\u2013Leibler divergence between the empirical distribution and the predicted distribution. The cross entropy loss is ubiquitous in modern deep neural networks."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. 
Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body."}, {"text": "The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "Which machine learning algorithms require feature scaling", "positive_ctxs": [{"text": "The Machine Learning algorithms that require the feature scaling are mostly KNN (K-Nearest Neighbours), Neural Networks, Linear Regression, and Logistic Regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represent some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. 
When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms."}, {"text": "In pattern recognition and machine learning, a feature vector is an n-dimensional vector of numerical features that represent some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis. When representing images, the feature values might correspond to the pixels of an image, while when representing texts the features might be the frequencies of occurrence of textual terms."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "More specific algorithms are often available as publicly available scripts or third-party add-ons. 
There are also software packages targeting specific software machine learning applications that specialize in feature extraction."}, {"text": "wordsfish): algorithms that allocate text units into an ideological continuum depending on shared grammatical content. Contrary to supervised scaling methods such as wordscores, methods such as wordfish do not require that the researcher provides samples of extreme ideological texts."}]}, {"question": "What does Inter rater mean", "positive_ctxs": [{"text": "Definition. Inter-rater reliability is the extent to which two or more raters (or observers, coders, examiners) agree. It addresses the issue of consistency of the implementation of a rating system. Inter-rater reliability can be evaluated by using a number of different statistics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "If the raters tend to agree, the differences between the raters' observations will be near zero. If one rater is usually higher or lower than the other by a consistent amount, the bias will be different from zero. If the raters tend to disagree, but without a consistent pattern of one rating higher than the other, the mean will be near zero."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. 
Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "How do you know if something is independent in probability", "positive_ctxs": [{"text": "Events A and B are independent if the equation P(A\u2229B) = P(A) \u00b7 P(B) holds true. You can use the equation to check if events are independent; multiply the probabilities of the two events together to see if they equal the probability of them both happening together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "For example, consider what happens when a person is shown a color swatch and identifies it, saying \"it's red\". The easy problem only requires understanding the machinery in the brain that makes it possible for a person to know that the color swatch is red. The hard problem is that people also know something else\u2014they also know what red looks like."}]}, {"question": "What is an indicator random variable", "positive_ctxs": [{"text": "An indicator random variable is a special kind of random variable associated with the occurence of an event. The indicator random variable IA associated with event A has value 1 if event A occurs and has value 0 otherwise. 
In other words, IA maps all outcomes in the set A to 1 and all outcomes outside A to 0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that has the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing."}, {"text": "One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that has the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In other words, the probability that a random variable assumes a value depends on its immediate neighboring random variables. The probability of a random variable in an MRF is given by"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. 
It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}]}, {"question": "Why is the coefficient of variation a better risk measure to", "positive_ctxs": [{"text": "The coefficient of variation is a better risk measure than the standard deviation alone because the CV adjusts for the size of the project. The CV measures the standard deviation divided by the mean and therefore puts the standard deviation into context."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. 
It is often expressed as a percentage, and is defined as the ratio of the standard deviation"}, {"text": "One measure of the statistical risk of a continuous variable, such as the return on an investment, is simply the estimated variance of the variable, or equivalently the square root of the variance, called the standard deviation. Another measure in finance, one which views upside risk as unimportant compared to downside risk, is the downside beta. In the context of a binary variable, a simple statistical measure of risk is simply the probability that a variable will take on the lower of two values."}, {"text": "When normalizing by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD) may be used to avoid ambiguity. This is analogous to the coefficient of variation with the RMSD taking the place of the standard deviation."}, {"text": "When normalizing by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD) may be used to avoid ambiguity. This is analogous to the coefficient of variation with the RMSD taking the place of the standard deviation."}, {"text": "In education, it has been used as a measure of the inequality of universities. In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases. In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic.The Gini coefficient is sometimes used for the measurement of the discriminatory power of rating systems in credit risk management.A 2005 study accessed US census data to measure home computer ownership, and used the Gini coefficient to measure inequalities amongst whites and African Americans."}, {"text": "In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the Root Mean Square Deviation (RMSD). 
While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range."}, {"text": "The coefficient of variation fulfills the requirements for a measure of economic inequality. If x (with entries xi) is a list of the values of an economic indicator (e.g. wealth), with xi being the wealth of agent i, then the following requirements are met:"}]}, {"question": "Which classification algorithms is easiest to start with for prediction", "positive_ctxs": [{"text": "1 \u2014 Linear Regression. 2 \u2014 Logistic Regression. 3 \u2014 Linear Discriminant Analysis. 4 \u2014 Classification and Regression Trees. 5 \u2014 Naive Bayes. 6 \u2014 K-Nearest Neighbors. 7 \u2014 Learning Vector Quantization. 8 \u2014 Support Vector Machines.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative."}, {"text": "Decision tree is used as prediction models for classification and data fitting. The decision tree structure can be used to generate rules able to classify or predict target/class/label variable based on the observation attributes."}, {"text": "Decision tree is used as prediction models for classification and data fitting. The decision tree structure can be used to generate rules able to classify or predict target/class/label variable based on the observation attributes."}, {"text": "Decision tree is used as prediction models for classification and data fitting. 
The decision tree structure can be used to generate rules able to classify or predict target/class/label variable based on the observation attributes."}, {"text": "Decision tree is used as prediction models for classification and data fitting. The decision tree structure can be used to generate rules able to classify or predict target/class/label variable based on the observation attributes."}, {"text": "The A* search algorithm is an example of a best-first search algorithm, as is B*. Best-first algorithms are often used for path finding in combinatorial search. Neither A* nor B* is a greedy best-first search, as they incorporate the distance from the start in addition to estimated distances to the goal."}, {"text": "In computer science, a sequential algorithm or serial algorithm is an algorithm that is executed sequentially \u2013 once through, from start to finish, without other processing executing \u2013 as opposed to concurrently or in parallel. The term is primarily used to contrast with concurrent algorithm or parallel algorithm; most standard computer algorithms are sequential algorithms, and not specifically identified as such, as sequentialness is a background assumption. Concurrency and parallelism are in general distinct concepts, but they often overlap \u2013 many distributed algorithms are both concurrent and parallel \u2013 and thus \"sequential\" is used to contrast with both, without distinguishing which one."}]}, {"question": "How Principal component analysis is used for feature selection", "positive_ctxs": [{"text": "A feature selection method is proposed to select a subset of variables in principal component analysis (PCA) that preserves as much information present in the complete data as possible. The information is measured by means of the percentage of consensus in generalised Procrustes analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Perhaps the most widely used algorithm for manifold learning is kernel PCA. 
It is a combination of Principal component analysis and the kernel trick. PCA begins by computing the covariance matrix of the"}, {"text": "Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is entitled kernel PCA."}, {"text": "Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique is capable of constructing nonlinear mappings that maximize the variance in the data. The resulting technique is entitled kernel PCA."}, {"text": "During the process of extracting the discriminative features prior to the clustering, Principal component analysis (PCA), though commonly used, is not a necessarily discriminative approach. In contrast, LDA is a discriminative one. Linear discriminant analysis (LDA), provides an efficient way of eliminating the disadvantage we list above."}, {"text": "Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors)."}, {"text": "Principal component analysis (PCA) is a widely used method for factor extraction, which is the first phase of EFA. Factor weights are computed to extract the maximum possible variance, with successive factoring continuing until there is no further meaningful variance left. 
The factor model must then be rotated for analysis.Canonical factor analysis, also called Rao's canonical factoring, is a different method of computing the same model as PCA, which uses the principal axis method."}, {"text": "Principal component regression (PCR) is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables."}]}, {"question": "Why does face verification identification usually use cosine similarity", "positive_ctxs": [{"text": "The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face verification accuracy. The use of cosine similarity in our method leads to an effective learning algorithm which can improve the generalization ability of any given metric."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As all vectors under consideration by this model are element wise nonnegative, a cosine value of zero means that the query and document vector are orthogonal and have no match (i.e. the query term does not exist in the document being considered). See cosine similarity for further information."}, {"text": "A soft cosine or (\"soft\" similarity) between two vectors considers similarities between pairs of features. 
The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which helps generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity."}, {"text": "A soft cosine or (\"soft\" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which helps generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity."}, {"text": "It is thus a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors oriented at 90\u00b0 relative to each other have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude. The cosine similarity is particularly used in positive space, where the outcome is neatly bounded in"}, {"text": "It is thus a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors oriented at 90\u00b0 relative to each other have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude. The cosine similarity is particularly used in positive space, where the outcome is neatly bounded in"}, {"text": "Identification \u2013 an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle."}, {"text": "Identification \u2013 an individual instance of an object is recognized.
Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle."}]}, {"question": "Where are machine learning models stored", "positive_ctxs": [{"text": "When dealing with Machine Learning models, it is usually recommended that you store them somewhere. At the private sector, you oftentimes train them and store them before production, while in research and for future model tuning it is a good idea to store them locally."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Where as, a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Where as, a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. 
Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "\".Modern day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models.
A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}]}, {"question": "Is linear discriminant analysis machine learning", "positive_ctxs": [{"text": "Logistic regression is a classification algorithm traditionally limited to only two-class classification problems. If you have more than two classes then Linear Discriminant Analysis is the preferred linear classification technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events.
The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis."}, {"text": "Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis."}]}, {"question": "What is E in P value", "positive_ctxs": [{"text": "A significant result indicates that your data are significantly heteroscedastic, and thus the assumption of homoscedasticity in the regression residuals is violated. In your case the data violate the assumption of homoscedasticity, as your p value is 8.6\u22c510\u221228. 
The e is standard scientific notation for powers of 10."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:"}, {"text": "To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:"}, {"text": "Thus, P ( Y \u2264 1/3 | X ) = g1 (X). The expectation of this random variable is equal to the (unconditional) probability, E ( P ( Y \u2264 1/3 | X ) ) = P ( Y \u2264 1/3 ), namely,"}, {"text": "of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula"}, {"text": "Conditional probability may be treated as a special case of conditional expectation. Namely, P ( A | X ) = E ( Y | X ) if Y is the indicator of A. Therefore the conditional probability also depends on the partition \u03b1X generated by X rather than on X itself; P ( A | g(X) ) = P (A | X) = P (A | \u03b1), \u03b1 = \u03b1X = \u03b1g(X)."}, {"text": "The union of M and N is the matroid whose underlying set is the union (not the disjoint union) of E and F, and whose independent sets are those subsets that are the union of an independent set in M and one in N. Usually the term \"union\" is applied when E = F, but that assumption is not essential. If E and F are disjoint, the union is the direct sum."}, {"text": "If P is a program which outputs a string x, then P is a description of x. 
The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII)."}]}, {"question": "What is the meaning of latent variable", "positive_ctxs": [{"text": "A latent variable is a variable that cannot be observed. The presence of latent variables, however, can be detected by their effects on variables that are observable. Most constructs in research are latent variables. Because measurement error is by definition unique variance, it is not captured in the latent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "It is shown that method of moments (tensor decomposition techniques) consistently recover the parameters of a large class of latent variable models under some assumptions.The Expectation\u2013maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. 
In contrast, for the method of moments, the global convergence is guaranteed under some conditions."}, {"text": "It is shown that method of moments (tensor decomposition techniques) consistently recover the parameters of a large class of latent variable models under some assumptions.The Expectation\u2013maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. In contrast, for the method of moments, the global convergence is guaranteed under some conditions."}, {"text": "It is shown that method of moments (tensor decomposition techniques) consistently recover the parameters of a large class of latent variable models under some assumptions.The Expectation\u2013maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and it is not guaranteed that the algorithm will converge to the true unknown parameters of the model. In contrast, for the method of moments, the global convergence is guaranteed under some conditions."}, {"text": "By introducing the latent variable, independence is restored in the sense that within classes variables are independent (local independence). We then say that the association between the observed variables is explained by the classes of the latent variable (McCutcheon, 1987)."}]}, {"question": "Why we use Ancova instead of Anova", "positive_ctxs": [{"text": "ANOVA is used to compare and contrast the means of two or more populations. 
ANCOVA is used to compare one variable in two or more populations while considering other variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}, {"text": "Here a, b, and k are parameters. This distribution arises from the construction of a system of discrete distributions similar to that of the Pearson distributions for continuous distributions.One can generate Student-t samples by taking the ratio of variables from the normal distribution and the square-root of \u03c72-distribution. If we use instead of the normal distribution, e.g., the Irwin\u2013Hall distribution, we obtain over-all a symmetric 4-parameter distribution, which includes the normal, the uniform, the triangular, the Student-t and the Cauchy distribution."}, {"text": "As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead we sample only a tiny part of it, and measure that. Assume the sample is of size N; that is, we measure the heights of N individuals."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. 
To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}]}, {"question": "What is the ith order statistic", "positive_ctxs": [{"text": "The ith order statistic of a set of n elements is the ith smallest element. For example, the minimum of a set of elements is the first order statistic (i = 1), and the maximum is the nth order statistic (i = n). A median, informally, is the \"halfway point\" of the set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. 
The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. 
The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}, {"text": "In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is ith observation of the jth independent variable, j = 1, 2, ..., p. The values \u03b2j represent parameters to be estimated, and \u03b5i is the ith independent identically distributed normal error."}]}, {"question": "What are the differences between a convolutional network and a feedforward neural network", "positive_ctxs": [{"text": "Convolution neural network is a type of neural network which has some or all convolution layers. Feed forward neural network is a network which is not recursive. neurons in this layer were only connected to neurons in the next layer. neurons in this layer were only connected to neurons in the next layer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. 
Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}]}, {"question": "What is the use of Markov chain", "positive_ctxs": [{"text": "Markov chains are an important concept in stochastic processes. 
They can be used to greatly simplify processes that satisfy the Markov property, namely that the future state of a stochastic variable is only dependent on its present state."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An example use of a Markov chain is Markov chain Monte Carlo, which uses the Markov property to prove that a particular method for performing a random walk will sample from the joint distribution."}, {"text": "Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term \"Markov chain\" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term \"Markov process\" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model)."}, {"text": "A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC)."}, {"text": "A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. 
For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space)."}, {"text": "Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners."}, {"text": "Markov sources are commonly used in communication theory, as a model of a transmitter. Markov sources also occur in natural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques of hidden Markov models, such as the Viterbi algorithm."}, {"text": "In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain Monte Carlo samplers is only related to the number of interacting Markov chain Monte Carlo samplers. These advanced particle methodologies belong to the class of Feynman-Kac particle models, also called Sequential Monte Carlo or particle filter methods in Bayesian inference and signal processing communities. Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selection genetic particle algorithm with Markov chain Monte Carlo mutations."}]}, {"question": "What is CRF NLP", "positive_ctxs": [{"text": "Conditional Random Fields (CRF) CRF is a discriminant model for sequences data similar to MEMM. It models the dependency between each state and the entire input sequences. 
Unlike MEMM, CRF overcomes the label bias issue by using a global normalizer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MRF and the same arguments hold."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "from one finite set of labels Y. Instead of directly modeling P(y|x) as an ordinary linear-chain CRF would do, a set of latent variables h is \"inserted\" between x and y using the chain rule of probability:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized.
This is an array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "What is 2 standard deviations from the mean", "positive_ctxs": [{"text": "68% of the data is within 1 standard deviation (\u03c3) of the mean (\u03bc), 95% of the data is within 2 standard deviations (\u03c3) of the mean (\u03bc), and 99.7% of the data is within 3 standard deviations (\u03c3) of the mean (\u03bc)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Symmetry of the distribution decreases the inequality's bounds by a factor of 2 while unimodality sharpens the bounds by a factor of 4/9. Because the mean and the mode in a unimodal distribution differ by at most \u221a3 standard deviations at most 5% of a symmetrical unimodal distribution lies outside (2\u221a10 + 3\u221a3)/3 standard deviations of the mean (approximately 3.840 standard deviations). This is sharper than the bounds provided by the Chebyshev inequality (approximately 4.472 standard deviations)."}, {"text": "An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table."}, {"text": "An observation is rarely more than a few standard deviations away from the mean. Chebyshev's inequality ensures that, for all distributions for which the standard deviation is defined, the amount of data within a number of standard deviations of the mean is at least as much as given in the following table."}, {"text": "Hence the conditional expected value of Y, given that X is t standard deviations above its mean (and that includes the case where it's below its mean, when t < 0), is rt standard deviations above the mean of Y.
Since |r| \u2264 1, Y is no farther from the mean than X is, as measured in the number of standard deviations.Hence, if 0 \u2264 r < 1, then (X, Y) shows regression toward the mean (by this definition)."}, {"text": "In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores."}, {"text": "In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores."}, {"text": "In statistics, the standard score is the number of standard deviations by which the value of a raw score (i.e., an observed value or data point) is above or below the mean value of what is being observed or measured. Raw scores above the mean have positive standard scores, while those below the mean have negative standard scores."}]}, {"question": "What is CalibratedClassifierCV", "positive_ctxs": [{"text": "In scikit-learn we can use the CalibratedClassifierCV class to create well calibrated predicted probabilities using k-fold cross-validation. In CalibratedClassifierCV the training sets are used to train the model and the test sets is used to calibrate the predicted probabilities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is an array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "How do you classify images in machine learning", "positive_ctxs": [{"text": "Different classifiers are then added on top of this feature extractor to classify images. Support Vector Machines. It is a supervised machine learning algorithm used for both regression and classification problems. Decision Trees. K Nearest Neighbor. Artificial Neural Networks. Convolutional Neural Networks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive.
Imagine that you are not aware of this issue."}, {"text": "Modern-day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "Modern-day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "Modern-day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "Modern-day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles.
Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}, {"text": "Modern-day machine learning has two objectives, one is to classify data based on models which have been developed, the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. Whereas a machine learning algorithm for stock trading may inform the trader of future potential predictions."}]}, {"question": "What is a good ROC score", "positive_ctxs": [{"text": "The AUC value lies between 0.5 and 1, where 0.5 denotes a bad classifier and 1 denotes an excellent classifier."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one.
In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "If a standard score is applied to the ROC curve, the curve will be transformed into a straight line. This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "What are some examples of bivariate data", "positive_ctxs": [{"text": "Data for two variables (usually two types of related data). Example: Ice cream sales versus the temperature on that day. The two variables are Ice Cream Sales and Temperature."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "These non-parametric approaches may give more meaningful results in some situations where bivariate normality does not hold. 
However the standard versions of these approaches rely on exchangeability of the data, meaning that there is no ordering or grouping of the data pairs being analyzed that might affect the behavior of the correlation estimate."}, {"text": "These non-parametric approaches may give more meaningful results in some situations where bivariate normality does not hold. However the standard versions of these approaches rely on exchangeability of the data, meaning that there is no ordering or grouping of the data pairs being analyzed that might affect the behavior of the correlation estimate."}, {"text": "Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 4-quantiles (the \"quartiles\") of this dataset?"}, {"text": "Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 4-quantiles (the \"quartiles\") of this dataset?"}, {"text": "Consider an ordered population of 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}. What are the 4-quantiles (the \"quartiles\") of this dataset?"}]}, {"question": "Do you multiply independent events probability", "positive_ctxs": [{"text": "Statement of the Multiplication Rule In order to use the rule, we need to have the probabilities of each of the independent events. Given these events, the multiplication rule states the probability that both events occur is found by multiplying the probabilities of each event."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Lyra visits the Dark Materials Research Laboratory where she meets the chief researcher, Mary Malone, who, has the uncanny ability to see particles of dark matter, if she puts herself in the right mood. She tells Lyra \"you can't see them unless you put your mind in a certain state. Do you know the poet John Keats?"}, {"text": "The concepts of mutually independent events and mutually exclusive events are separate and distinct. 
The following table contrasts results for the two cases (provided that the probability of the conditioning event is not zero)."}, {"text": "In this case time comes into play and we have a different type of probability depending on time or the number of times the die is thrown. On the other hand, the a priori probability is independent of time - you can look at the die on the table as long as you like without touching it and you deduce the probability for the number 6 to appear on the upper face is 1/6."}, {"text": "Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. Addition, the simplest of these, is undone by subtraction: when you add 5 to x to get x + 5, to reverse this operation you need to subtract 5 from x + 5. Multiplication, the next-simplest operation, is undone by division: if you multiply x by 5 to get 5x, you then can divide 5x by 5 to return to the original expression x. Logarithms also undo a fundamental arithmetic operation, exponentiation."}, {"text": "Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary."}, {"text": "Do you support the unprovoked military action by the USA?will likely result in data skewed in different directions, although they are both polling about the support for the war. A better way of wording the question could be \"Do you support the current US military action abroad?\" A still more nearly neutral way to put that question is \"What is your view about the current US military action abroad?\""}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. 
The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}]}, {"question": "What is wrong with stepwise regression", "positive_ctxs": [{"text": "Findings. A fundamental problem with stepwise regression is that some real explanatory variables that have causal effects on the dependent variable may happen to not be statistically significant, while nuisance variables may be coincidentally significant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data."}, {"text": "What happens if the person's address as stored in the database is incorrect? Suppose an official accidentally entered the wrong address or date? 
Or, suppose the person lied about their address for some reason."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Improvements to the LASSO include Bolasso which bootstraps samples; Elastic net regularization, which combines the L1 penalty of LASSO with the L2 penalty of ridge regression; and FeaLect which scores all the features based on combinatorial analysis of regression coefficients. AEFS further extends LASSO to the nonlinear scenario with autoencoders. These approaches tend to be between filters and wrappers in terms of computational complexity. In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique."}, {"text": "Improvements to the LASSO include Bolasso which bootstraps samples; Elastic net regularization, which combines the L1 penalty of LASSO with the L2 penalty of ridge regression; and FeaLect which scores all the features based on combinatorial analysis of regression coefficients. AEFS further extends LASSO to the nonlinear scenario with autoencoders. These approaches tend to be between filters and wrappers in terms of computational complexity. In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique."}]}, {"question": "What are different ensemble learning algorithms", "positive_ctxs": [{"text": "Gradient Boosting or GBM is another ensemble machine learning algorithm that works for both regression and classification problems. GBM uses the boosting technique, combining a number of weak learners to form a strong learner.
We will use a simple example to understand the GBM algorithm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner. Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning on one non-ensemble system."}, {"text": "The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner. Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning on one non-ensemble system."}, {"text": "The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner. Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning on one non-ensemble system."}, {"text": "An ensemble system may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods, than would have been improved by increasing resource use for a single method.
Fast algorithms such as decision trees are commonly used in ensemble methods (for example, random forests), although slower algorithms can benefit from ensemble techniques as well."}, {"text": "An ensemble system may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods, than would have been improved by increasing resource use for a single method. Fast algorithms such as decision trees are commonly used in ensemble methods (for example, random forests), although slower algorithms can benefit from ensemble techniques as well."}, {"text": "An ensemble system may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods, than would have been improved by increasing resource use for a single method. Fast algorithms such as decision trees are commonly used in ensemble methods (for example, random forests), although slower algorithms can benefit from ensemble techniques as well."}, {"text": "Although perhaps non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees). Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb-down the models in order to promote diversity."}]}, {"question": "How do you calculate average precision score", "positive_ctxs": [{"text": "The mean Average Precision or mAP score is calculated by taking the mean AP over all classes and/or overall IoU thresholds, depending on different detection challenges that exist. In PASCAL VOC2007 challenge, AP for one object class is calculated for an IoU threshold of 0.5."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? 
How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "is a relevant document, zero otherwise. Note that the average is over all relevant documents and the relevant documents not retrieved get a precision score of zero."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "The Internet Movie Database uses a formula for calculating and comparing the ratings of films by its users, including their Top Rated 250 Titles which is claimed to give \"a true Bayesian estimate\". The following Bayesian formula was initially used to calculate a weighted average score for the Top 250, though the formula has since changed:"}, {"text": "In the student test example above, it was assumed implicitly that what was being measured did not change between the two measurements. Suppose, however, that the course was pass/fail and students were required to score above 70 on both tests to pass. Then the students who scored under 70 the first time would have no incentive to do well, and might score worse on average the second time."}]}, {"question": "What is interpolation in machine learning", "positive_ctxs": [{"text": "Interpolation is making an educated guess with the information within a certain data set. 
It is a \u201cbest guess\u201d using the information you have at hand."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Linear interpolation as described here is for data points in one spatial dimension. For two spatial dimensions, the extension of linear interpolation is called bilinear interpolation, and in three dimensions, trilinear interpolation. Notice, though, that these interpolants are no longer linear functions of the spatial coordinates, rather products of linear functions; this is illustrated by the clearly non-linear example of bilinear interpolation in the figure below."}, {"text": "Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using Pad\u00e9 approximant, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Linear interpolation has been used since antiquity for filling the gaps in tables. Suppose that one has a table listing the population of some country in 1970, 1980, 1990 and 2000, and that one wanted to estimate the population in 1994. Linear interpolation is an easy way to do this."}, {"text": "The simplest interpolation method is to locate the nearest data value, and assign the same value. 
In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}]}, {"question": "What is the difference between self supervised and unsupervised learning", "positive_ctxs": [{"text": "Unsupervised learning uses the entire dataset for the supervised training process. In contrast, in self-supervised learning, you withhold part of the data in some form, and you try to predict the rest. In contrast, in self-supervised learning, you withhold part of the data in some form, and you try to predict the rest."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. 
It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. 
\"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}]}, {"question": "What makes a matrix symmetric", "positive_ctxs": [{"text": "A matrix A is symmetric if it is equal to its transpose, i.e., A=AT. A matrix A is symmetric if and only if swapping indices doesn't change its components, i.e., aij=aji."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where Z is a normalization constant, A is a symmetric positive definite matrix (inverse covariance matrix a.k.a. precision matrix) and b is the shift vector."}, {"text": "An eigenvalue \u03bb of a matrix M is characterized by the algebraic relation Mu = \u03bbu. When M is Hermitian, a variational characterization is also available. Let M be a real n \u00d7 n symmetric matrix."}, {"text": "An eigenvalue \u03bb of a matrix M is characterized by the algebraic relation Mu = \u03bbu. When M is Hermitian, a variational characterization is also available. Let M be a real n \u00d7 n symmetric matrix."}, {"text": "The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics."}, {"text": "Exact solutions for the variants of NMF can be expected (in polynomial time) when additional constraints hold for matrix V. A polynomial time algorithm for solving nonnegative rank factorization if V contains a monomial sub matrix of rank equal to its rank was given by Campbell and Poole in 1981. 
Kalofolias and Gallopoulos (2012) solved the symmetric counterpart of this problem, where V is symmetric and contains a diagonal principal sub matrix of rank r. Their algorithm runs in O(rm2) time in the dense case. Arora, Ge, Halpern, Mimno, Moitra, Sontag, Wu, & Zhu (2013) give a polynomial time algorithm for exact NMF that works for the case where one of the factors W satisfies a separability condition."}, {"text": "Exact solutions for the variants of NMF can be expected (in polynomial time) when additional constraints hold for matrix V. A polynomial time algorithm for solving nonnegative rank factorization if V contains a monomial sub matrix of rank equal to its rank was given by Campbell and Poole in 1981. Kalofolias and Gallopoulos (2012) solved the symmetric counterpart of this problem, where V is symmetric and contains a diagonal principal sub matrix of rank r. Their algorithm runs in O(rm2) time in the dense case. Arora, Ge, Halpern, Mimno, Moitra, Sontag, Wu, & Zhu (2013) give a polynomial time algorithm for exact NMF that works for the case where one of the factors W satisfies a separability condition."}, {"text": "There is an alternative way that does not explicitly use the eigenvalue decomposition. Usually the singular value problem of a matrix M is converted into an equivalent symmetric eigenvalue problem such as M M*, M*M, or"}]}, {"question": "How is root mean square calculated", "positive_ctxs": [{"text": "A kind of average sometimes used in statistics and engineering, often abbreviated as RMS. To find the root mean square of a set of numbers, square all the numbers in the set and then find the arithmetic mean of the squares. Take the square root of the result. This is the root mean square."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. 
Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared-speed. The RMS speed of an ideal gas is calculated using the following equation:"}, {"text": "In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared-speed. The RMS speed of an ideal gas is calculated using the following equation:"}]}, {"question": "What is probit regression used for", "positive_ctxs": [{"text": "Probit regression, also called a probit model, is used to model dichotomous or binary outcome variables. In the probit model, the inverse standard normal distribution of the probability is modeled as a linear combination of the predictors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In such a situation, ordinary least squares (the basic regression technique) is widely seen as inadequate; instead probit regression or logistic regression is used. Further, sometimes there are three or more categories for the dependent variable \u2014 for example, no charges, charges, and death sentences. 
In this case, the multinomial probit or multinomial logit technique is used."}, {"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory? (#5) \u2013 Finale, summing up, and my own view"}, {"text": "and then running the linear regression on these transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for \"probability unit\". However, this is computationally more expensive."}, {"text": "and then running the linear regression on these transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for \"probability unit\". However, this is computationally more expensive."}, {"text": "and then running the linear regression on these transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for \"probability unit\". However, this is computationally more expensive."}]}, {"question": "What is input layer in CNN", "positive_ctxs": [{"text": "4.1 Input Layer Input layer in CNN should contain image data. 
Image data is represented by a three-dimensional matrix, as we saw earlier. You need to reshape it into a single column. If you have \u201cm\u201d training examples, then the dimension of the input will be (784, m)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. 
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. 
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance. The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}]}, {"question": "Can neural network handle categorical data", "positive_ctxs": [{"text": "Because neural networks work internally with numeric data, binary data (such as sex, which can be male or female) and categorical data (such as a community, which can be suburban, city or rural) must be encoded in numeric form."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Able to handle both numerical and categorical data. Other techniques are usually specialized in analyzing datasets that have only one type of variable. (For example, relation rules can be used only with nominal variables while neural networks can be used only with numerical variables or categoricals converted to 0-1 values.)"}, {"text": "Able to handle both numerical and categorical data. Other techniques are usually specialized in analyzing datasets that have only one type of variable. (For example, relation rules can be used only with nominal variables while neural networks can be used only with numerical variables or categoricals converted to 0-1 values.)"}, {"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. 
They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees."}, {"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees."}, {"text": "By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications."}, {"text": "By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications."}, {"text": "Categorical data is the statistical data type consisting of categorical variables or of data that has been converted into that form, for example as grouped data. More specifically, categorical data may derive from observations made of qualitative data that are summarised as counts or cross tabulations, or from observations of quantitative data grouped within given intervals. Often, purely categorical data are summarised in the form of a contingency table."}]}, {"question": "What is the variance of the estimator", "positive_ctxs": [{"text": "Variance of estimator: Variance is one of the most popularly used measures of spread. 
It is taken into consideration for quantification of the amount of dispersion with respect to a set of data values. Variance is defined as the average of the squared deviation of each observation from its mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Experimental designs are evaluated using statistical criteria. It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss\u2013Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an (\"efficient\") estimator is called the \"Fisher information\" for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information."}, {"text": "for all values of the parameter, then the estimator is called efficient. Equivalently, the estimator achieves equality in the Cram\u00e9r\u2013Rao inequality for all \u03b8. The Cram\u00e9r\u2013Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the \"best\" an unbiased estimator can be."}, {"text": "for all values of the parameter, then the estimator is called efficient. Equivalently, the estimator achieves equality in the Cram\u00e9r\u2013Rao inequality for all \u03b8. The Cram\u00e9r\u2013Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the \"best\" an unbiased estimator can be."}, {"text": "for all values of the parameter, then the estimator is called efficient. Equivalently, the estimator achieves equality in the Cram\u00e9r\u2013Rao inequality for all \u03b8. The Cram\u00e9r\u2013Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the \"best\" an unbiased estimator can be."}, {"text": "When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. 
The inverse matrix of the variance-matrix is called the \"information matrix\". Because the variance of the estimator of a parameter vector is a matrix, the problem of \"minimizing the variance\" is complicated."}, {"text": "This estimator has mean \u03b8 and variance of \u03c32 / n, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution."}, {"text": "This estimator has mean \u03b8 and variance of \u03c32 / n, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution."}]}, {"question": "How does connected component image labeling work on colored images", "positive_ctxs": [{"text": "How It Works. Connected component labeling works by scanning an image, pixel-by-pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e. regions of adjacent pixels which share the same set of intensity values V."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed. When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well."}, {"text": "define connected components labeling as the \u201c[c]reation of a labeled image in which the positions associated with the same connected component of the binary input image have a unique label.\u201d Shapiro et al. define CCL as an operator whose \u201cinput is a binary image and [...] 
output is a symbolic image in which the label assigned to each pixel is an integer uniquely identifying the connected component to which that pixel belongs.\u201d There is no consensus on the definition of CCA in the academic literature. It is often used interchangeably with CCL."}, {"text": "Homographies between pairs of images are then computed using RANSAC and a probabilistic model is used for verification. Because there is no restriction on the input images, graph search is applied to find connected components of image matches such that each connected component will correspond to a panorama. Finally, for each connected component, bundle adjustment is performed to solve for joint camera parameters, and the panorama is rendered using multi-band blending."}, {"text": "The emergence of FPGAs with enough capacity to perform complex image processing tasks also led to high-performance architectures for connected-component labeling. Most of these architectures utilize the single-pass variant of this algorithm, because of the limited memory resources available on an FPGA. These types of connected component labeling architectures are able to process several image pixels in parallel, thereby enabling a high throughput at low processing latency to be achieved."}, {"text": "A more extensive definition is given by Shapiro et al.: \u201cConnected component analysis consists of connected component labeling of the black pixels followed by property measurement of the component regions and decision making.\u201d The definition for connected-component analysis presented here is more general, taking the thoughts expressed in into account."}, {"text": "Connected-component labeling (CCL), connected-component analysis (CCA), blob extraction, region labeling, blob discovery, or region extraction is an algorithmic application of graph theory, where subsets of connected components are uniquely labeled based on a given heuristic. 
Connected-component labeling is not to be confused with segmentation."}, {"text": "Co-training can only work if one of the classifiers correctly labels a piece of data that the other classifier previously misclassified. If both classifiers agree on all the unlabeled data, i.e. they are not independent, labeling the data does not create new information."}]}, {"question": "How do you find the variance of a sum of squares", "positive_ctxs": [{"text": "The variance is the average of the sum of squares (i.e., the sum of squares divided by the number of observations). The standard deviation is the square root of the variance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. 
In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "where R2 is the coefficient of determination and VARerr and VARtot are the variance of the residuals and the sample variance of the dependent variable. SSerr (the sum of squared prediction errors, equivalently the residual sum of squares), SStot (the total sum of squares), and SSreg (the sum of squares of the regression, equivalently the explained sum of squares) are given by"}, {"text": "The square root of s2 is called the regression standard error, standard error of the regression, or standard error of the equation. It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto X. The coefficient of determination R2 is defined as a ratio of \"explained\" variance to the \"total\" variance of the dependent variable y, in the cases where the regression sum of squares equals the sum of squares of residuals:"}, {"text": "The square root of s2 is called the regression standard error, standard error of the regression, or standard error of the equation. It is common to assess the goodness-of-fit of the OLS regression by comparing how much the initial variation in the sample can be reduced by regressing onto X. The coefficient of determination R2 is defined as a ratio of \"explained\" variance to the \"total\" variance of the dependent variable y, in the cases where the regression sum of squares equals the sum of squares of residuals:"}, {"text": "In general, total sum of squares = explained sum of squares + residual sum of squares. 
For a proof of this in the multivariate ordinary least squares (OLS) case, see partitioning in the general OLS model."}]}, {"question": "How are predictive analytics commonly used", "positive_ctxs": [{"text": "Predictive analytics are used to determine customer responses or purchases, as well as promote cross-sell opportunities. Predictive models help businesses attract, retain and grow their most profitable customers. Improving operations. Many companies use predictive models to forecast inventory and manage resources."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Much of the software that is currently used for learning analytics duplicates functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions. Some examples of learning analytics software tools include:"}, {"text": "k-nearest neighbor search identifies the top k nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors."}, {"text": "The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis."}, {"text": "Differentiating the fields of educational data mining (EDM) and learning analytics (LA) has been a concern of several researchers. George Siemens takes the position that educational data mining encompasses both learning analytics and academic analytics, the former of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. 
Baepler and Murdoch define academic analytics as an area that \"...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior\"."}, {"text": "Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics."}, {"text": "Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics."}, {"text": "For manual material handling workers, predictive analytics and artificial intelligence may be used to reduce musculoskeletal injury. Wearable sensors may also enable earlier intervention against exposure to toxic substances, and the large data sets generated could improve workplace health surveillance, risk assessment, and research.AI can also be used to make the workplace safety and health workflow more efficient. One example is coding of workers' compensation claims."}]}, {"question": "What are the domains of machine learning", "positive_ctxs": [{"text": "Machine learning is perhaps the principal technology behind two emerging domains: data science and artificial intelligence. 
The rise of machine learning is coming about through the availability of data and computation, but machine learning methodologies are fundamentally dependent on models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. 
Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}, {"text": "A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning."}]}, {"question": "How do you determine the size of a hidden layer", "positive_ctxs": [{"text": "The number of hidden neurons should be between the size of the input layer and the size of the output layer. The number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer. 
The number of hidden neurons should be less than twice the size of the input layer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If linear activations are used, or only a single sigmoid hidden layer, then the optimal solution to an autoencoder is strongly related to principal component analysis (PCA). The weights of an autoencoder with a single hidden layer of size"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Minsky and Papert used perceptrons with a restricted number of inputs to the hidden layer A-elements and a locality condition: each element of the hidden layer receives the input signals from a small circle. These restricted perceptrons cannot define whether the image is a connected figure or whether the number of pixels in the image is even (the parity predicate)."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? 
How do we determine the critical value c?"}]}, {"question": "What is the difference between blocking and stratification", "positive_ctxs": [{"text": "Blocking refers to classifying experimental units into blocks whereas stratification refers to classifying individuals of a population into strata. The samples from the strata in a stratified random sample can be the blocks in an experiment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d, which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d, which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. 
Establishing and correcting for bias is necessary for calibration."}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e., the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e., the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}]}, {"question": "Is maximum likelihood estimator biased", "positive_ctxs": [{"text": "It is well known that maximum likelihood estimators are often biased, and it is of use to estimate the expected bias so that we can reduce the mean square errors of our parameter estimates. In both problems, the first-order bias is found to be linear in the parameter and the sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another estimator which is asymptotically normal and efficient is the maximum likelihood estimator (MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example."}, {"text": "but this is a biased estimate. 
Its inverse, (r + k)/r, is an unbiased estimate of 1/p, however. The maximum likelihood estimator only exists for samples for which the sample variance is larger than the sample mean. The likelihood function for N iid observations (k1, ..., kN) is"}, {"text": "A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter \u03b8 that maximizes the probability of \u03b8 given the data, given by Bayes' theorem:"}, {"text": "A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter \u03b8 that maximizes the probability of \u03b8 given the data, given by Bayes' theorem:"}, {"text": "A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter \u03b8 that maximizes the probability of \u03b8 given the data, given by Bayes' theorem:"}, {"text": "This is the sample maximum, scaled to correct for the bias, and is MVUE by the Lehmann\u2013Scheff\u00e9 theorem. Unscaled sample maximum T(X) is the maximum likelihood estimator for \u03b8."}]}, {"question": "What is a good way to understand tensors", "positive_ctxs": [{"text": "Rather than trying to define a number, instead define what a field of numbers is; instead of defining what a vector is, consider instead all the vectors that make up a vector space. So to understand tensors of a particular type, instead consider all those tensors of the same type together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The tensor algebra T(V) is a formal way of adding products to any vector space V to obtain an algebra. 
As a vector space, it is spanned by symbols, called simple tensors"}, {"text": "Simple to understand and interpret. People are able to understand decision tree models after a brief explanation. Trees can also be displayed graphically in a way that is easy for non-experts to interpret."}, {"text": "Simple to understand and interpret. People are able to understand decision tree models after a brief explanation. Trees can also be displayed graphically in a way that is easy for non-experts to interpret."}, {"text": "One way to understand the motivation of the above definition is to consider the optimal transport problem. That is, for a distribution of mass"}, {"text": "Autoconstructive evolution is a good platform for answering theoretical questions about the evolution of evolvability. Preliminary evidence suggests that the way in which offspring are generated changes substantially over the course of evolution. By studying these patterns, we can begin to understand how evolving systems organize themselves to evolve faster."}, {"text": "is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups."}, {"text": "is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups."}]}, {"question": "How Bayes theorem is applied in machine learning", "positive_ctxs": [{"text": "Bayes Theorem for Modeling Hypotheses. Bayes Theorem is a useful tool in applied machine learning. It provides a way of thinking about the relationship between data and a model. 
A machine learning algorithm or model is a specific way of thinking about the structured relationships in the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. 
Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}]}, {"question": "What is NLTK WordNet", "positive_ctxs": [{"text": "The WordNet is a part of Python's Natural Language Toolkit. It is a large word database of English Nouns, Adjectives, Adverbs and Verbs. These are grouped into some set of cognitive synonyms, which are called synsets. In the wordnet, there are some groups of words, whose meaning are same."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "For calculating soft cosine, the matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. 
Then we just multiply by this matrix."}, {"text": "For calculating soft cosine, the matrix s is used to indicate similarity between features. It can be calculated through Levenshtein distance, WordNet similarity, or other similarity measures. Then we just multiply by this matrix."}, {"text": "ImageNet uses a variant of the broad WordNet schema to categorize objects, augmented with 120 categories of dog breeds to showcase fine-grained classification. One downside of WordNet use is the categories may be more \"elevated\" than would be optimal for ImageNet: \"Most people are more interested in Lady Gaga or the iPod Mini than in this rare kind of diplodocus.\" In 2012 ImageNet was the world's largest academic user of Mechanical Turk."}]}, {"question": "What are the differences between supervised and unsupervised classification", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. 
\"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. 
\"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}]}, {"question": "What is the formula for hypergeometric distribution", "positive_ctxs": [{"text": "Hypergeometric Formula.. The hypergeometric distribution has the following properties: The mean of the distribution is equal to n * k / N . The variance is n * k * ( N - k ) * ( N - n ) / [ N2 * ( N - 1 ) ] ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "for 0 \u2264 y \u2264 min ( 3, x ). It is the hypergeometric distribution H ( x; 3, 7 ), or equivalently, H ( 3; x, 10-x ). The corresponding expectation 0.3 x, obtained from the general formula"}, {"text": "The model of an urn with green and red marbles can be extended to the case where there are more than two colors of marbles. If there are Ki marbles of color i in the urn and you take n marbles at random without replacement, then the number of marbles of each color in the sample (k1, k2,..., kc) has the multivariate hypergeometric distribution. 
This has the same relationship to the multinomial distribution that the hypergeometric distribution has to the binomial distribution\u2014the multinomial distribution is the \"with-replacement\" distribution and the multivariate hypergeometric is the \"without-replacement\" distribution."}, {"text": "The characteristic function is the Fourier transform of the probability density function. The characteristic function of the beta distribution is Kummer's confluent hypergeometric function (of the first kind):"}, {"text": "The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test. Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see)."}, {"text": "The formula above gives the exact hypergeometric probability of observing this particular arrangement of the data, assuming the given marginal totals, on the null hypothesis that men and women are equally likely to be studiers. To put it another way, if we assume that the probability that a man is a studier is"}, {"text": "The formula above gives the exact hypergeometric probability of observing this particular arrangement of the data, assuming the given marginal totals, on the null hypothesis that men and women are equally likely to be studiers. To put it another way, if we assume that the probability that a man is a studier is"}, {"text": "Note: Since we\u2019re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. 
Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large."}]}, {"question": "Why is test data set used", "positive_ctxs": [{"text": "Finally, the test dataset is a dataset used to provide an unbiased evaluation of a final model fit on the training dataset. If the data in the test dataset has never been used in training (for example in cross-validation), the test dataset is also called a holdout dataset."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The modified Thompson Tau test is used to find one outlier at a time (largest value of \u03b4 is removed if it is an outlier). Meaning, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in a data set."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier. To do this, the final model is used to predict classifications of examples in the test set."}, {"text": "A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier. 
To do this, the final model is used to predict classifications of examples in the test set."}, {"text": "A test set is therefore a set of examples used only to assess the performance (i.e. generalization) of a fully specified classifier. To do this, the final model is used to predict classifications of examples in the test set."}]}, {"question": "What is a probability distribution example", "positive_ctxs": [{"text": "The probability distribution of a discrete random variable can always be represented by a table. For example, suppose you flip a coin two times. The probability of getting 0 heads is 0.25; 1 head, 0.50; and 2 heads, 0.25. Thus, the table is an example of a probability distribution for a discrete random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be?"}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. 
Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "A probability distribution whose sample space is one-dimensional (for example real numbers, list of labels, ordered labels or binary) is called univariate, while a distribution whose sample space is a vector space of dimension 2 or more is called multivariate. 
A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector \u2013 a list of two or more random variables \u2013 taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution."}, {"text": "It is often necessary to generalize the above definition for more arbitrary subsets of the real line. In these contexts, a continuous probability distribution is defined as a probability distribution with a cumulative distribution function that is absolutely continuous. Equivalently, it is a probability distribution on the real numbers that is absolutely continuous with respect to the Lebesgue measure."}, {"text": "It is often necessary to generalize the above definition for more arbitrary subsets of the real line. In these contexts, a continuous probability distribution is defined as a probability distribution with a cumulative distribution function that is absolutely continuous. Equivalently, it is a probability distribution on the real numbers that is absolutely continuous with respect to the Lebesgue measure."}]}, {"question": "Is mean a biased estimator", "positive_ctxs": [{"text": "A statistic is biased if the long-term average value of the statistic is not the parameter it is estimating. More formally, a statistic is biased if the mean of the sampling distribution of the statistic is not equal to the parameter. Therefore the sample mean is an unbiased estimate of \u03bc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. 
A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "The Cram\u00e9r\u2013Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cram\u00e9r\u2013Rao lower bound; see estimator bias."}, {"text": "Two naturally desirable properties of estimators are for them to be unbiased and have minimal mean squared error (MSE). 
These cannot in general both be satisfied simultaneously: a biased estimator may have lower mean squared error (MSE) than any unbiased estimator; see estimator bias."}, {"text": "Efficiency in statistics is important because it allows one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gather around a number closer to the true value."}, {"text": "Efficiency in statistics is important because it allows one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gather around a number closer to the true value."}, {"text": "Efficiency in statistics is important because it allows one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gather around a number closer to the true value."}]}, {"question": "Does the data follow a normal distribution", "positive_ctxs": [{"text": "Many everyday data sets typically follow a normal distribution: for example, the heights of adult humans, the scores on a test given to a large class, errors in measurements. 
The normal distribution is always symmetrical about the mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between the two approaches is that the GLM strictly assumes that the residuals will follow a conditionally normal distribution, while the GLiM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the GLM is a special case of the GLiM in which the distribution of the residuals follows a conditionally normal distribution."}, {"text": "The main difference between the two approaches is that the GLM strictly assumes that the residuals will follow a conditionally normal distribution, while the GLiM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the GLM is a special case of the GLiM in which the distribution of the residuals follows a conditionally normal distribution."}, {"text": "An alternative parametric approach is to assume that the residuals follow a mixture of normal distributions (Daemi et al. 2019); in particular, a contaminated normal distribution in which the majority of observations are from a specified normal distribution, but a small proportion are from a normal distribution with much higher variance. That is, residuals have probability"}, {"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}, {"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. 
For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}, {"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}, {"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}]}, {"question": "What is a simple linear regression model", "positive_ctxs": [{"text": "Simple linear regression is a regression model that estimates the relationship between one independent variable and one dependent variable using a straight line. Both variables should be quantitative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. 
The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "Why is my validation loss lower than training loss", "positive_ctxs": [{"text": "The second reason you may see validation loss lower than training loss is due to how the loss values are measured and reported: training loss is measured during each epoch, while validation loss is measured after each epoch."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs). For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "In machine learning, the hinge loss is a loss function used for training classifiers. 
The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs). For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is"}, {"text": "The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is"}, {"text": "The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regards to sample complexity) than for the logistic loss or hinge loss functions. In addition, functions which yield high values of"}, {"text": "In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used."}, {"text": "The logistic loss is sometimes called cross-entropy loss. 
It is also known as log loss (in this case, the binary label is often denoted by {-1,+1}). Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for linear regression."}]}, {"question": "Is Random Forest generative or discriminative", "positive_ctxs": [{"text": "In other words, discriminative models are used to specify outputs based on inputs (by models such as Logistic regression, Neural networks and Random forests), while generative models generate both inputs and outputs (for example, by Hidden Markov model, Bayesian Networks and Gaussian mixture model)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Classifiers computed without using a probability model are also referred to loosely as \"discriminative\". The distinction between these last two classes is not consistently made; Jebara (2004) refers to these three classes as generative learning, conditional learning, and discriminative learning, but Ng & Jordan (2002) only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model."}, {"text": "In addition, most discriminative models are inherently supervised and cannot easily support unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model."}, {"text": "The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. 
Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution."}, {"text": "Typical discriminative models include logistic regression (LR), conditional random fields (CRFs) (specified over an undirected graph), decision trees, and many others. Typical generative model approaches include naive Bayes classifiers, Gaussian mixture models, variational autoencoders, generative adversarial networks and others."}]}, {"question": "Why do we use wavelet transform", "positive_ctxs": [{"text": "The ability to slide the signal is what gives engineers a more accurate representation of the signal and therefore a better resolution in time.
So when you use a Wavelet Transform the signal is deconstructed using the same wavelet at different scales, rather than the same sin() wave at different frequencies."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In recent work on model-free analyses, wavelet transform based methods (for example locally stationary wavelets and wavelet decomposed neural networks) have gained favor. Multiscale (often referred to as multiresolution) techniques decompose a given time series, attempting to illustrate time dependence at multiple scales. See also Markov switching multifractal (MSMF) techniques for modeling volatility evolution."}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying"}, {"text": "One of the most important applications of sparse dictionary learning is in the field of compressed sensing or signal recovery.
In compressed sensing, a high-dimensional signal can be recovered with only a few linear measurements provided that the signal is sparse or nearly sparse. Since not all signals satisfy this sparsity condition, it is of great importance to find a sparse representation of that signal such as the wavelet transform or the directional gradient of a rasterized matrix."}]}, {"question": "How do you do stratified sampling", "positive_ctxs": [{"text": "To create a stratified random sample, there are seven steps: (a) defining the population; (b) choosing the relevant stratification; (c) listing the population; (d) listing the population according to the chosen stratification; (e) choosing your sample size; (f) calculating a proportionate stratification; and (g) using"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "convergence\u2014i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions. A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large.
To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer.
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What is weighted average with example", "positive_ctxs": [{"text": "A method of computing a kind of arithmetic mean of a set of numbers in which some elements of the set carry more importance (weight) than others. Example: Grades are often computed using a weighted average. Suppose that homework counts 10%, quizzes 20%, and tests 70%."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A weighted average is an average that has multiplying factors to give different weights to data at different positions in the sample window. Mathematically, the weighted moving average is the convolution of the datum points with a fixed weighting function. One application is removing pixelisation from a digital graphical image. In technical analysis of financial data, a weighted moving average (WMA) has the specific meaning of weights that decrease in arithmetical progression."}, {"text": "= the mean vote across the whole pool (currently 7.0). Note that W is just the weighted arithmetic mean of R and C with weight vector (v, m). As the number of ratings surpasses m, the confidence of the average rating surpasses the confidence of the prior knowledge, and the weighted bayesian rating (W) approaches a straight average (R). The closer v (the number of ratings for the film) is to zero, the closer W gets to C, where W is the weighted rating and C is the average rating of all films."}, {"text": "This is equivalent to adding C data points of value m to the data set.
It is a weighted average of a prior average m and the sample average."}, {"text": "An exponential moving average (EMA), also known as an exponentially weighted moving average (EWMA), is a first-order infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero. The graph at right shows an example of the weight decrease."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "This simple form of exponential smoothing is also known as an exponentially weighted moving average (EWMA). Technically it can also be classified as an autoregressive integrated moving average (ARIMA) (0,1,1) model with no constant term."}, {"text": "However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 40 km/h. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds; and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed."}]}, {"question": "What does marginal distribution mean", "positive_ctxs": [{"text": "In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. 
It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "simply sets the overall scaling of the distribution. In the Bayesian derivation of the marginal distribution of an unknown normal mean"}, {"text": "Let X1, X2 be random variables with identical marginal distributions with mean \u03bc. In this formalization, the bivariate distribution of X1 and X2 is said to exhibit regression toward the mean if, for every number c > \u03bc, we have"}, {"text": "Being a less restrictive approach, regression towards the mean can be defined for any bivariate distribution with identical marginal distributions. One definition accords closely with the common usage of the term \"regression towards the mean\". Not all such bivariate distributions show regression towards the mean under this definition."}, {"text": "The following definition of reversion toward the mean has been proposed by Samuels as an alternative to the more restrictive definition of regression toward the mean above. Let X1, X2 be random variables with identical marginal distributions with mean \u03bc. In this formalization, the bivariate distribution of X1 and X2 is said to exhibit reversion toward the mean if, for every number c, we have"}, {"text": "Moreover, the final row and the final column give the marginal probability distribution for A and the marginal probability distribution for B respectively. For example, for A the first of these cells gives the sum of the probabilities for A being red, regardless of which possibility for B in the column above the cell occurs, as 2/3.
Thus the marginal probability distribution for"}, {"text": "This definition accords closely with the current common usage, evolved from Galton's original usage, of the term \"regression toward the mean.\" It is \"restrictive\" in the sense that not every bivariate distribution with identical marginal distributions exhibits regression toward the mean (under this definition)."}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}]}, {"question": "What is conditional probability explain with an example", "positive_ctxs": [{"text": "Conditional probability is the probability of one event occurring with some relationship to one or more other events. For example: Event A is that it is raining outside, and it has a 0.3 (30%) chance of raining today. Event B is that you will need to go outside, and that has a probability of 0.5 (50%)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "is also a probability measure for all \u03c9 \u2208 \u03a9. An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation."}, {"text": "What is the probability of winning the car given the player has picked door 1 and the host has opened door 3? The answer to the first question is 2/3, as is correctly shown by the \"simple\" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1/2. This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1/3) or if the car is behind door 2 (also originally with probability 1/3)."}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "are categorical variables, a conditional probability table is typically used to represent the conditional probability.
The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable."}]}, {"question": "How can neural networks be improved", "positive_ctxs": [{"text": "Now we'll check out the proven way to improve the performance (Speed and Accuracy both) of neural network models: Increase hidden Layers. Change Activation function. Change Activation function in Output layer. Increase number of neurons. Weight initialization. More data. Normalizing/Scaling data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The experiments noted that the accuracy of neural networks and convolutional neural networks were improved through transfer learning both at the first epoch (prior to any learning, i.e. compared to standard random weight distribution) and at the asymptote (the end of the learning process). That is, algorithms are improved by exposure to another domain."}, {"text": "In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM. LSTM broke records for improved machine translation, Language Modeling and Multilingual Language Processing. LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning. Given the computation and memory overheads of running LSTMs, there have been efforts on accelerating LSTM using hardware accelerators."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences.
Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}]}, {"question": "What is probability and random process", "positive_ctxs": [{"text": "In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Many stochastic processes can be represented by time series. However, a stochastic process is by nature continuous while a time series is a set of observations indexed by integers."}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}, {"text": "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. This process and technique is known as simple random sampling, and should not be confused with systematic random sampling."}]}, {"question": "What is the MN rule in statistics", "positive_ctxs": [{"text": "The mn Rule Consider an experiment that is performed in two stages.
If the first stage can be accomplished in m different ways and for each of these ways, the second stage can be accomplished in n different ways, then there are total mn different ways to accomplish the experiment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "A critical concept in LCS and rule-based machine learning alike, is that an individual rule is not in itself a model, since the rule is only applicable when its condition is satisfied. Think of a rule as a \"local-model\" of the solution space."}]}, {"question": "Is recurrent neural networks are best suited for text processing", "positive_ctxs": [{"text": "Recurrent Neural Networks (RNNs) are a form of machine learning algorithm that are ideal for sequential data such as text, time series, financial data, speech, audio, video among others."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "a literal \"bucket of water\" can serve as a reservoir that performs computations on inputs given as perturbations of the surface. The resultant complexity of such recurrent neural networks was found to be useful in solving a variety of problems including language processing and dynamic system modeling. However, training of recurrent neural networks is challenging and computationally expensive."}, {"text": "Learning algorithm: Different networks modify their connections differently. In general, any mathematically defined change in connection weights over time is referred to as the \"learning algorithm\". Connectionists are in agreement that recurrent neural networks (directed networks wherein connections of the network can form a directed cycle) are a better model of the brain than feedforward neural networks (directed networks with no cycles, called DAG). Many recurrent connectionist models also incorporate dynamical systems theory."}, {"text": "Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain.
Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step."}, {"text": "The Transformer is a deep learning model introduced in 2017, used primarily in the field of natural language processing (NLP). Like recurrent neural networks (RNNs), Transformers are designed to handle sequential data, such as natural language, for tasks such as translation and text summarization. However, unlike RNNs, Transformers do not require that the sequential data be processed in order. For example, if the input data is a natural language sentence, the Transformer does not need to process the beginning of it before the end."}, {"text": "They are in fact recursive neural networks with a particular structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step."}]}, {"question": "Where is a bootstrap distribution centered", "positive_ctxs": [{"text": "3.1. Each bootstrap distribution is centered at the statistic from the corresponding sample rather than at the population mean \u03bc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This sampling process is repeated many times as for other bootstrap methods. Considering the centered sample mean in this case, the random sample original distribution function"}, {"text": "for sample size n. Histograms of the bootstrap distribution and the smooth bootstrap distribution appear below. The bootstrap distribution of the sample-median has only a small number of values. The smoothed bootstrap distribution has a richer support."}, {"text": "Another example of the same phenomena is the case when the prior estimate and a measurement are normally distributed. If the prior is centered at B with deviation \u03a3, and the measurement is centered at b with deviation \u03c3, then the posterior is centered at"}, {"text": "The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean.
Hence the multivariate normal distribution is an example of the class of elliptical distributions."}, {"text": "Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data does not come from a normal distribution). This result is used to justify using a normal distribution, or a chi square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity."}, {"text": "The bootstrap distribution for Newcomb's data appears below. A convolution method of regularization reduces the discreteness of the bootstrap distribution by adding a small amount of N(0, \u03c32) random noise to each bootstrap sample."}, {"text": "Basic bootstrap, also known as the Reverse Percentile Interval. The basic bootstrap is a simple scheme to construct the confidence interval: one simply takes the empirical quantiles from the bootstrap distribution of the parameter (see Davison and Hinkley 1997, equ."}]}, {"question": "Who has beaten AlphaGo", "positive_ctxs": [{"text": "DeepMind"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19\u00d719 board. In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program has beaten a 9-dan professional without handicap. Although it lost to Lee Sedol in the fourth game, Lee resigned in the final game, giving a final score of 4 games to 1 in favour of AlphaGo."}, {"text": "In October 2015, the distributed version of AlphaGo defeated the European Go champion Fan Hui, a 2-dan (out of 9 dan possible) professional, five to zero. This was the first time a computer Go program had beaten a professional human player on a full-sized board without handicap. 
The announcement of the news was delayed until 27 January 2016 to coincide with the publication of a paper in the journal Nature describing the algorithms used."}, {"text": "IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson has struggled to achieve success and adoption in healthcare. Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in concept processing technology in EMR software."}, {"text": "IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson has struggled to achieve success and adoption in healthcare. Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in concept processing technology in EMR software."}, {"text": "The competition began on October 2, 2006. By October 8, a team called WXYZConsulting had already beaten Cinematch's results. By October 15, there were three teams who had beaten Cinematch, one of them by 1.06%, enough to qualify for the annual progress prize. By June 2007 over 20,000 teams had registered for the competition from over 150 countries."}, {"text": "In May 2016, Google unveiled its own proprietary hardware \"tensor processing units\", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol. In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master, and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. 
Fan Hui, three stones, and AlphaGo Master was even three stones stronger."}, {"text": "AlphaGo appeared to have unexpectedly become much stronger, even when compared with its October 2015 match where a computer had beaten a Go professional for the first time ever without the advantage of a handicap. The day after Lee's first defeat, Jeong Ahram, the lead Go correspondent for one of South Korea's biggest daily newspapers, said \"Last night was very gloomy...\" The Korea Baduk Association, the organization that oversees Go professionals in South Korea, awarded AlphaGo an honorary 9-dan title for exhibiting creative skills and pushing forward the game's progress. China's Ke Jie, an 18-year-old generally recognized as the world's best Go player at the time, initially claimed that he would be able to beat AlphaGo, but declined to play against it for fear that it would \"copy my style\"."}]}, {"question": "Who proved the central limit theorem", "positive_ctxs": [{"text": "Pierre-Simon Laplace"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Here, the central limit theorem states that the distribution of the sample mean \"for very large samples\" is approximately normally distributed, if the distribution is not heavy tailed."}, {"text": "Here, the central limit theorem states that the distribution of the sample mean \"for very large samples\" is approximately normally distributed, if the distribution is not heavy tailed."}, {"text": "The convergence of a random walk toward the Wiener process is controlled by the central limit theorem, and by Donsker's theorem. For a particle in a known fixed position at t = 0, the central limit theorem tells us that after a large number of independent steps in the random walk, the walker's position is distributed according to a normal distribution of total variance:"}, {"text": "As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). 
Actually it is possible to establish the central limit theorem and large deviation theorem in this setting."}, {"text": "This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form"}, {"text": "This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form"}, {"text": "The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. Therefore, it is less and less applicable with the sample involved in modern inference instances. The fault is not in the sample size on its own part."}]}, {"question": "Evolutionary Computation Estimation of Distribution Algorithm EDA", "positive_ctxs": [{"text": "The estimation of distribution algorithm (EDA) aims to explicitly model the probability distribution of the quality solutions to the underlying problem. By iterative filtering for quality solution from competing ones, the probability model eventually approximates the distribution of global optimum solutions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Estimation of Distribution Algorithm (EDA) substitutes traditional reproduction operators by model-guided operators. Such models are learned from the population by employing machine learning techniques and represented as Probabilistic Graphical Models, from which new solutions can be sampled or generated from guided-crossover."}, {"text": "Gill, P. M. W. (2007). \"Efficient calculation of p-values in linear-statistic permutation significance tests\" (PDF). Journal of Statistical Computation and Simulation."}, {"text": "Gill, P. M. W. (2007). 
\"Efficient calculation of p-values in linear-statistic permutation significance tests\" (PDF). Journal of Statistical Computation and Simulation."}, {"text": "Gill, P. M. W. (2007). \"Efficient calculation of p-values in linear-statistic permutation significance tests\" (PDF). Journal of Statistical Computation and Simulation."}, {"text": "Gill, P. M. W. (2007). \"Efficient calculation of p-values in linear-statistic permutation significance tests\" (PDF). Journal of Statistical Computation and Simulation."}, {"text": "Gill, P. M. W. (2007). \"Efficient calculation of p-values in linear-statistic permutation significance tests\" (PDF). Journal of Statistical Computation and Simulation."}, {"text": "\"The Brain, Its Sensory Order and the Evolutionary Concept of Mind, On Hayek's Contribution to Evolutionary Epistemology\". Journal for Social and Biological Structures."}]}, {"question": "What is the role of artificial intelligence in the shaping modern society", "positive_ctxs": [{"text": "Artificial intelligence can dramatically improve the efficiencies of our workplaces and can augment the work humans can do. When AI takes over repetitive or dangerous tasks, it frees up the human workforce to do work they are better equipped for\u2014tasks that involve creativity and empathy among others."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. 
The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "For example, because the product of characteristic functions \u03c61*\u03c62* ... *\u03c6n = 0 whenever any one of the functions equals 0, it plays the role of logical OR: IF \u03c61 = 0 OR \u03c62 = 0 OR ... OR \u03c6n = 0 THEN their product is 0. What appears to the modern reader as the representing function's logical inversion, i.e. the representing function is 0 when the function R is \"true\" or satisfied\", plays a useful role in Kleene's definition of the logical functions OR, AND, and IMPLY (p. 228), the bounded- (p. 228) and unbounded- (p. 279 ff) mu operators (Kleene (1952)) and the CASE function (p. 
229)."}]}, {"question": "Is fastText deep learning", "positive_ctxs": [{"text": "Implementing Deep Learning Methods and Feature Engineering for Text Data: FastText. Overall, FastText is a framework for learning word representations and also performing robust, fast and accurate text classification. The framework is open-sourced by Facebook on GitHub."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "fastText is a library for learning of word embeddings and text classification created by Facebook's AI Research (FAIR) lab. The model allows one to create an unsupervised learning or supervised learning algorithm for obtaining vector representations for words. Facebook makes available pretrained models for 294 languages."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. 
The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}, {"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}]}, {"question": "When would you use an exponential distribution", "positive_ctxs": [{"text": "The exponential distribution is often used to model the longevity of an electrical or mechanical device. In Example, the lifetime of a certain computer part has the exponential distribution with a mean of ten years (X\u223cExp(0.1))."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The distribution of failure times is over-laid with a curve representing an exponential distribution. For this example, the exponential distribution approximates the distribution of failure times. The exponential curve is a theoretical distribution fitted to the actual failure times."}, {"text": "To calculate decimal odds, you can use the equation Return = Initial Wager x Decimal Value. 
For example, if you bet \u20ac100 on Liverpool to beat Manchester City at 2.00 odds you would win \u20ac200 (\u20ac100 x 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the inverse of the probability of an outcome."}, {"text": "Like its continuous analogue (the exponential distribution), the geometric distribution is memoryless. That means that if you intend to repeat an experiment until the first success, then, given that the first success has not yet occurred, the conditional probability distribution of the number of additional trials does not depend on how many failures have been observed. The die one throws or the coin one tosses does not have a \"memory\" of these failures."}, {"text": "The conjugate prior for the exponential distribution is the gamma distribution (of which the exponential distribution is a special case). The following parameterization of the gamma probability density function is useful:"}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}]}, {"question": "What is entropy in layman's terms", "positive_ctxs": [{"text": "The definition is: \"Entropy is a measure of how evenly energy is distributed in a system. 
In a physical system, entropy provides a measure of the amount of energy that cannot be used to do work.\""}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the \"entropy of a continuous distribution\"), as the concluding part of the same paper where he defined the discrete entropy. It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy."}, {"text": "When viewed in terms of information theory, the entropy state function is the amount of information (in the Shannon sense) in the system, that is needed to fully specify the microstate of the system. This is lacking in the macroscopic description."}, {"text": "In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy."}, {"text": "The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities pi so that"}, {"text": "The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. 
The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system \u2013 modeled at first classically, e.g."}, {"text": "Boltzmann's constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J\u22c5K\u22121) in the International System of Units (or kg\u22c5m2\u22c5s\u22122\u22c5K\u22121 in terms of base units). The entropy of a substance is usually given as an intensive property \u2013 either entropy per unit mass (SI unit: J\u22c5K\u22121\u22c5kg\u22121) or entropy per unit amount of substance (SI unit: J\u22c5K\u22121\u22c5mol\u22121)."}, {"text": "The conditional quantum entropy is an entropy measure used in quantum information theory. It is a generalization of the conditional entropy of classical information theory."}]}, {"question": "How cut off value is calculated from ROC curve", "positive_ctxs": [{"text": "For this, you aim to maximize the Youden's index, which is Maximum=Sensitivity + Specificity - 1. So you choose those value of the ROC-curve as a cut-off, where the term \"Sensitivity + Specificity - 1\" (parameters taken from the output in the same line as the observed value, see attachments) is maximal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. 
In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. 
In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "It can also be thought of as a plot of the power as a function of the Type I Error of the decision rule (when the performance is calculated from just a sample of the population, it can be thought of as estimators of these quantities). The ROC curve is thus the sensitivity or recall as a function of fall-out. In general, if the probability distributions for both detection and false alarm are known, the ROC curve can be generated by plotting the cumulative distribution function (area under the probability distribution from"}, {"text": "The authors recommended a cut off value of 1.5 with B being greater than 1.5 for a bimodal distribution and less than 1.5 for a unimodal distribution. No statistical justification for this value was given."}]}, {"question": "What standard error tells us", "positive_ctxs": [{"text": "The standard error tells you how accurate the mean of any given sample from that population is likely to be compared to the true population mean. When the standard error increases, i.e. 
the means are more spread out, it becomes more likely that any given mean is an inaccurate representation of the true population mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where the operator E denotes the expected value. Convergence in r-th mean tells us that the expectation of the r-th power of the difference between"}, {"text": "; however, for many signals of interest the Fourier transform does not formally exist. Regardless, Parseval's Theorem tells us that we can re-write the average power as follows."}, {"text": "In that case the \"failure\" of a MaxEnt prediction tells us that there is something more which is relevant that we may have overlooked in the physics of the system."}, {"text": "This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant."}, {"text": "It is very similar to the Z-score but with the difference that t-statistic is used when the sample size is small or the population standard deviation is unknown. For example, the t-statistic is used in estimating the population mean from a sampling distribution of sample means if the population standard deviation is unknown. It is also used along with p-value when running hypothesis tests where the p-value tells us what the odds are of the results to have happened."}, {"text": "Statistical theory tells us about the uncertainties in extrapolating from a sample to the frame. It should be expected that sample frames, will always contain some mistakes. In some cases, this may lead to sampling bias."}, {"text": "In the example above, the confidence interval only tells us that there is roughly a 50% chance that the p-value is smaller than 0.05, i.e. 
it is completely unclear whether the null hypothesis should be rejected at a level"}]}, {"question": "What is the purpose of measuring the validity of a test", "positive_ctxs": [{"text": "Validity is important because it can help determine what types of tests to use, and help to make sure researchers are using methods that are not only ethical, and cost-effective, but also a method that truly measures the idea or constructs in question."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Construct validity is \"the degree to which a test measures what it claims, or purports, to be measuring.\" In the classical model of test validity, construct validity is one of three main types of validity evidence, alongside content validity and criterion validity. Modern validity theory defines construct validity as the overarching concern of validity research, subsuming all other types of validity evidence. Construct validity is the appropriateness of inferences made on the basis of observations or measurements (often test scores), specifically whether a test measures the intended construct."}, {"text": "Construct validity is \"the degree to which a test measures what it claims, or purports, to be measuring.\" In the classical model of test validity, construct validity is one of three main types of validity evidence, alongside content validity and criterion validity. Modern validity theory defines construct validity as the overarching concern of validity research, subsuming all other types of validity evidence. Construct validity is the appropriateness of inferences made on the basis of observations or measurements (often test scores), specifically whether a test measures the intended construct."}, {"text": "Predictive validity shares similarities with concurrent validity in that both are generally measured as correlations between a test and some criterion measure. In a study of concurrent validity the test is administered at the same time as the criterion is collected. 
This is a common method of developing validity evidence for employment tests: A test is administered to incumbent employees, then a rating of those employees' job performance is, or has already been, obtained independently of the test (often, as noted above, in the form of a supervisor rating)."}, {"text": "Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Measures may have high validity, but when the test does not appear to be measuring what it is, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid."}, {"text": "While reliability does not imply validity, reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid. For example, if a set of weighing scales consistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid (as the returned weight is not the true weight)."}, {"text": "In psychometrics, predictive validity is the extent to which a score on a scale or test predicts scores on some criterion measure. For example, the validity of a cognitive test for job performance is the correlation between test scores and, for example, supervisor performance ratings. 
Such a cognitive test would have predictive validity if the observed correlation were statistically significant."}, {"text": "In psychometrics, validity has a particular application known as test validity: \"the degree to which evidence and theory support the interpretations of test scores\" (\"as entailed by proposed uses of tests\"). It is generally accepted that the concept of scientific validity addresses the nature of reality in terms of statistical measures and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the relationship between the premises and conclusion of an argument. In logic, validity refers to the property of an argument whereby if the premises are true then the truth of the conclusion follows by necessity."}]}, {"question": "What does the standard deviation tell you", "positive_ctxs": [{"text": "Standard deviation tells you how spread out the data is. It is a measure of how far each observed value is from the mean. In any distribution, about 95% of values will be within 2 standard deviations of the mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "SD (roulette, even-money bet) = 2b \u221anpq, where b = flat bet per round, n = number of rounds, p = 18/38, and q = 20/38. For example, after 10 rounds at 1 unit per round, the standard deviation will be 2 \u00d7 1 \u00d7 \u221a10 \u00d7 18/38 \u00d7 20/38 = 3.16 units. After 10 rounds, the expected loss will be 10 \u00d7 1 \u00d7 5.26% = 0.53. As you can see, standard deviation is many times the magnitude of the expected loss. The standard deviation for pai gow poker is the lowest out of all common casino games."}, {"text": "Often, we want some information about the precision of the mean we obtained. 
We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Precision is often determined as the standard deviation of the repeated measures of a given value, namely using the same method described above to assess measurement uncertainty. However, this method is correct only when the instrument is accurate. When it is inaccurate, the uncertainty is larger than the standard deviation of the repeated measures, and it appears evident that the uncertainty does not depend only on instrumental precision."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. 
The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}]}, {"question": "How do you calculate gain of common source amplifier", "positive_ctxs": [{"text": "0:0012:40Suggested clip \u00b7 82 secondsCommon Source Amplifiers - Gain Equation - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the loop gain (the product of the amplifier gain and the extent of the positive feedback) at any frequency is greater than one, then the amplifier will oscillate at that frequency (Barkhausen stability criterion). Such oscillations are sometimes called parasitic oscillations. An amplifier that is stable in one set of conditions can break into parasitic oscillation in another."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "However, the error amplifier is limited in its ability to gain small spikes at high frequencies. PSRR is expressed as follows:"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Regenerative circuits were invented and patented in 1914 for the amplification and reception of very weak radio signals. 
Carefully controlled positive feedback around a single transistor amplifier can multiply its gain by 1,000 or more. Therefore, a signal can be amplified 20,000 or even 100,000 times in one stage, that would normally have a gain of only 20 to 50."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "How do I check if a time series is stationary in R", "positive_ctxs": [{"text": "Use Augmented Dickey-Fuller Test (adf test). A p-Value of less than 0.05 in adf.test() indicates that it is stationary."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trend and seasonality."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work.
The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length."}, {"text": "However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length."}, {"text": "However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length."}]}, {"question": "How do I combine CNN and Lstm", "positive_ctxs": [{"text": "A CNN LSTM can be defined by adding CNN layers on the front end followed by LSTM layers with a Dense layer on the output. It is helpful to think of this architecture as defining two sub-models: the CNN Model for feature extraction and the LSTM Model for interpreting the features across time steps."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Rice, William R.; Gaines, Steven D. (June 1994). \"'Heads I win, tails you lose': testing directional alternative hypotheses in ecological and evolutionary research\". 
Directed tests combine the attributes of one-tailed and two-tailed tests."}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "How do neurons migrate to the proper position in the central and peripheral systems? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons."}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. How he got into my pajamas I don't know.\""}, {"text": "On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. 
So please don't tell us that reinforcement works and punishment does not, because the opposite is the case.\" This was a joyous moment, in which I understood an important truth about the world: because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them."}]}, {"question": "What is the difference between trend time series and cross section analysis", "positive_ctxs": [{"text": "Subject 2. Time-series data is a set of observations collected at usually discrete and equally spaced time intervals. Cross-sectional data are observations that come from different individuals or groups at a single point in time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trend and seasonality."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values.
While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the \"standard\". For example, when measuring the average difference between two time series"}, {"text": "In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the \"standard\". For example, when measuring the average difference between two time series"}, {"text": "where the median is found by, for example, sorting the values inside the brackets and finding the value in the middle. For larger values of n, the median can be efficiently computed by updating an indexable skiplist. Statistically, the moving average is optimal for recovering the underlying trend of the time series when the fluctuations about the trend are normally distributed.
However, the normal distribution does not place high probability on very large deviations from the trend which explains why such deviations will have a disproportionately large effect on the trend estimate."}]}, {"question": "What are the two stages of processing in the feature integration theory", "positive_ctxs": [{"text": "The pre-attention phase is an automatic process which happens unconsciously. The second stage is focused attention in which an individual takes all of the observed features and combines them to make a complete perception. This second stage process occurs if the object doesn't stand out immediately."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "According to Treisman, the first stage of the feature integration theory is the preattentive stage. During this stage, different parts of the brain automatically gather information about basic features (colors, shape, movement) that are found in the visual field. The idea that features are automatically separated appears counterintuitive."}, {"text": "Information acquired through both bottom-up and top-down processing is ranked according to priority. The priority ranking guides visual search and makes the search more efficient. Whether the Guided Search Model 2.0 or the feature integration theory are \"correct\" theories of visual search is still a hotly debated topic."}, {"text": "The second stage of feature integration theory is the focused attention stage, where a subject combines individual features of an object to perceive the whole object. 
Combining individual features of an object requires attention, and selecting that object occurs within a \"master map\" of locations. The master map of locations contains all the locations in which features have been detected, with each location in the master map having access to the multiple feature maps."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "The idea of local feature integration is found in several other models, such as the Convolutional Neural Network model, the SIFT method, and the HoG method."}, {"text": "forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them. Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck. A subsequent theory has been developed on exogenous attentional selection of visual input information for further processing guided by a bottom-up saliency map in the primary visual cortex. Current research in sensory processing is divided among a biophysical modelling of different subsystems and a more theoretical modelling of perception. Current models of perception have suggested that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world."}]}, {"question": "What is resampling in machine learning", "positive_ctxs": [{"text": "Data is the currency of applied machine learning.
Resampling is a methodology of economically using a data sample to improve the accuracy and quantify the uncertainty of a population parameter. Resampling methods, in fact, make use of a nested resampling method."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, a simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge.
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is data binning in statistics", "positive_ctxs": [{"text": "Statistical data binning is a way to group numbers of more or less continuous values into a smaller number of \"bins\".
For example, if you have data about a group of people, you might want to arrange their ages into a smaller number of age intervals (for example, grouping every five years together)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Statistical data binning is a way to group numbers of more or less continuous values into a smaller number of \"bins\". For example, if you have data about a group of people, you might want to arrange their ages into a smaller number of age intervals (for example, grouping every five years together). It can also be used in multivariate statistics, binning in several dimensions at once."}, {"text": "Data binning (also called Discrete binning or bucketing) is a data pre-processing technique used to reduce the effects of minor observation errors. The original data values which fall into a given small interval, a bin, are replaced by a value representative of that interval, often the central value. It is a form of quantization."}]}, {"question": "Is Q learning temporal difference", "positive_ctxs": [{"text": "Temporal Difference is an approach to learning how to predict a quantity that depends on future values of a given signal. It can be used to learn both the V-function and the Q-function, whereas Q-learning is a specific TD algorithm used to learn the Q-function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another class of model-free deep reinforcement learning algorithms rely on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function"}, {"text": "TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players. The lambda ("}, {"text": "TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel.
This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players. The lambda ("}, {"text": "As of December 2011, ISO/IEC 9075, Database Language SQL:2011 Part 2: SQL/Foundation included clauses in table definitions to define \"application-time period tables\" (valid time tables), \"system-versioned tables\" (transaction time tables) and \"system-versioned application-time period tables\" (bitemporal tables). A substantive difference between the TSQL2 proposal and what was adopted in SQL:2011 is that there are no hidden columns in the SQL:2011 treatment, nor does it have a new data type for intervals; instead two date or timestamp columns can be bound together using a PERIOD FOR declaration. Another difference is replacement of the controversial (prefix) statement modifiers from TSQL2 with a set of temporal predicates. Other features of SQL:2011 standard related to temporal databases are automatic time period splitting, temporal primary keys, temporal referential integrity, temporal predicates with Allen's interval algebra and time-sliced and sequenced queries."}, {"text": "Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue."}, {"text": "Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches.
Methods based on temporal differences also overcome the fourth issue."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}]}, {"question": "How data engineering is different from data science", "positive_ctxs": [{"text": "The main difference is the one of focus. Data Engineers are focused on building infrastructure and architecture for data generation. In contrast, data scientists are focused on advanced mathematics and statistical analysis on that generated data. Simply put, data scientists depend on data engineers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description."}, {"text": "Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description."}, {"text": "Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. 
Vasant Dhar writes that statistics emphasizes quantitative data and description."}, {"text": "Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description."}, {"text": "Feature engineering is the process of using domain knowledge to extract features from raw data via data mining techniques. These features can be used to improve the performance of machine learning algorithms. Feature engineering can be considered as applied machine learning itself."}, {"text": "Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing, and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data science program. He describes data science as an applied field growing out of traditional statistics."}, {"text": "Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing, and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data science program. He describes data science as an applied field growing out of traditional statistics."}]}, {"question": "What does product moment correlation coefficient mean", "positive_ctxs": [{"text": "The product moment correlation coefficient (pmcc) can be used to tell us how strong the correlation between two variables is. A positive value indicates a positive correlation and the higher the value, the stronger the correlation. 
If there is a perfect negative correlation, then r = -1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Unlike correlation coefficients, such as the product moment correlation coefficient, mutual information contains information about all dependence\u2014linear and nonlinear\u2014and not just linear dependence as the correlation coefficient measures. However, in the narrow case that the joint distribution for"}, {"text": "Pearson's correlation coefficient is the covariance of the two variables divided by the product of their standard deviations. The form of the definition involves a \"product moment\", that is, the mean (the first moment about the origin) of the product of the mean-adjusted random variables; hence the modifier product-moment in the name."}, {"text": "Pearson's correlation coefficient is the covariance of the two variables divided by the product of their standard deviations. The form of the definition involves a \"product moment\", that is, the mean (the first moment about the origin) of the product of the mean-adjusted random variables; hence the modifier product-moment in the name."}, {"text": "The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of"}, {"text": "The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of"}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. 
The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "However, Informedness and Markedness are Kappa-like renormalizations of Recall and Precision, and their geometric mean Matthews correlation coefficient thus acts like a debiased F-measure."}]}, {"question": "What is standard deviation and variance", "positive_ctxs": [{"text": "Key Takeaways. Standard deviation looks at how spread out a group of numbers is from the mean, by looking at the square root of the variance. The variance measures the average degree to which each point differs from the mean\u2014the average of all data points."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution. 
The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "These are the critical values of the normal distribution with right tail probability. However, t-values are used when the sample size is below 30 and the standard deviation is unknown.When the variance is unknown, we must use a different estimator:"}]}, {"question": "What is Perceptron learning algorithm", "positive_ctxs": [{"text": "In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. 
A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. 
During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "Is F distribution a normal distribution", "positive_ctxs": [{"text": "What is the F-distribution. A probability distribution, like the normal distribution, is means of determining the probability of a set of events occurring. This is true for the F-distribution as well. The F-distribution is a skewed distribution of probabilities similar to a chi-squared distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The subscript 1 indicates that this particular chi-square distribution is constructed from only 1 standard normal distribution. A chi-square distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution."}, {"text": "Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-square distribution is the square of a standard normal distribution."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. 
Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.The normal distribution is a subclass of the elliptical distributions."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.The normal distribution is a subclass of the elliptical distributions."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.The normal distribution is a subclass of the elliptical distributions."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. 
Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.The normal distribution is a subclass of the elliptical distributions."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other.The normal distribution is a subclass of the elliptical distributions."}]}, {"question": "Is convex optimization important for machine learning", "positive_ctxs": [{"text": "6 Answers. Machine learning algorithms use optimization all the time. Nonetheless, as mentioned in other answers, convex optimization is faster, simpler and less computationally intensive, so it is often easier to \"convexify\" a problem (make it convex optimization friendly), then use non-convex optimization."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There have been numerous developments within the past decade in convex optimization techniques which have influenced the application of proximal gradient methods in statistical learning theory. Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods."}, {"text": "Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. 
Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis."}, {"text": "One of the widely used convex optimization algorithms is projections onto convex sets (POCS). This algorithm is employed to recover/synthesize a signal satisfying simultaneously several convex constraints."}, {"text": "Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is weak classifier in AdaBoost", "positive_ctxs": [{"text": "A weak classifier is simply a classifier that performs poorly, but performs better than random guessing. AdaBoost can be applied to any classification algorithm, so it's really a technique that builds on top of other classifiers as opposed to being a classifier itself."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Pruning is the process of removing poorly performing weak classifiers to improve memory and execution-time cost of the boosted classifier. The simplest methods, which can be particularly effective in conjunction with totally corrective training, are weight- or margin-trimming: when the coefficient, or the contribution to the total test error, of some weak classifier falls below a certain threshold, that classifier is dropped. Margineantu & Dietterich suggest an alternative criterion for trimming: weak classifiers should be selected such that the diversity of the ensemble is maximized."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. 
Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. 
Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}, {"text": "Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization."}]}, {"question": "What is K means clustering algorithm explain with an example", "positive_ctxs": [{"text": "K-means clustering algorithm computes the centroids and iterates until we it finds optimal centroid. In this algorithm, the data points are assigned to a cluster in such a manner that the sum of the squared distance between the data points and centroid would be minimum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. 
Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}]}, {"question": "Is entropy quantized", "positive_ctxs": [{"text": "TL;DR: Entropy is not quantized. Entropy is often stated to be the logarithm of the number of Quantum States accessible to the system. Entropy is often stated to be the logarithm of the number of Quantum States accessible to the system."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Even though the quantized massless \u03c64 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson-Fisher fixed point, below."}, {"text": "The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. 
Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or \"spreading\" of the total energy of each constituent of a system over its particular quantized energy levels."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "However, even though the classical massless \u03c64 theory is scale-invariant in D=4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g."}, {"text": "The central bin is not divided in angular directions. The gradient orientations are quantized in 16 bins resulting in 272-bin histogram. The size of this descriptor is reduced with PCA."}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}]}, {"question": "Why do we need confusion matrix in data mining", "positive_ctxs": [{"text": "Confusion matrix not only gives you insight into the errors being made by your classifier but also types of errors that are being made. This breakdown helps you to overcome the limitation of using classification accuracy alone. 
Every column of the confusion matrix represents the instances of that predicted class."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. 
Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). 
Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. 
Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}, {"text": "Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). 
Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as \"unsupervised learning\" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge."}]}, {"question": "What is regression analysis example", "positive_ctxs": [{"text": "A simple linear regression plot for amount of rainfall. Regression analysis is used in stats to find trends in data. For example, you might guess that there's a connection between how much you eat and how much you weigh; regression analysis can help you quantify that."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Until a more analytical solution to MAUP is discovered, spatial sensitivity analysis using a variety of areal units is recommended as a methodology to estimate the uncertainty of correlation and regression coefficients due to ecological bias. An example of data simulation and re-aggregation using the ArcPy library is available.In transport planning, MAUP is associated to Traffic Analysis Zoning (TAZ). 
A major point of departure in understanding problems in transportation analysis is the recognition that spatial analysis has some limitations associated with the discretization of space."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}]}, {"question": "Where is coefficient variation used", "positive_ctxs": [{"text": "The coefficient of variation (COV) is a measure of relative event dispersion that's equal to the ratio between the standard deviation and the mean. 
While it is most commonly used to compare relative risk, the COV may be applied to any type of quantitative likelihood or probability distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When normalizing by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD) may be used to avoid ambiguity. This is analogous to the coefficient of variation with the RMSD taking the place of the standard deviation."}, {"text": "When normalizing by the mean value of the measurements, the term coefficient of variation of the RMSD, CV(RMSD) may be used to avoid ambiguity. This is analogous to the coefficient of variation with the RMSD taking the place of the standard deviation."}, {"text": "When the failure rate is decreasing the coefficient of variation is \u2a7e 1, and when the failure rate is increasing the coefficient of variation is \u2a7d 1. Note that this result only holds when the failure rate is defined for all t \u2a7e 0 and that the converse result (coefficient of variation determining nature of failure rate) does not hold."}, {"text": "A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and average is 100, giving the coefficient of variation as"}, {"text": "0 / 100 = 0A data set of [90, 100, 110] has more variability. Its sample standard deviation is 10 and its average is 100, giving the coefficient of variation as"}, {"text": "The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution."}, {"text": "The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. 
This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model."}]}, {"question": "How do you read cluster analysis", "positive_ctxs": [{"text": "Cluster analysis divides data into groups (clusters) that are meaningful, useful, or both. If meaningful groups are the goal, then the clusters should capture the natural structure of the data. In some cases, however, cluster analysis is only a useful starting point for other purposes, such as data summarization."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into either group."}, {"text": "Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into either group."}, {"text": "Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. 
This approach helps detect anomalous data points that do not fit into either group."}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}]}, {"question": "What test statistic is used to test a population proportion", "positive_ctxs": [{"text": "Test method. Use the one-sample z-test to determine whether the hypothesized population proportion differs significantly from the observed sample proportion."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the false positive rate of the test is higher than the proportion of the new population with the condition, then a test administrator whose experience has been drawn from testing in a high-prevalence population may conclude from experience that a positive test result usually indicates a positive subject, when in fact a false positive is far more likely to have occurred."}, {"text": "The test statistic should follow a normal distribution. Generally, one appeals to the central limit theorem to justify assuming that a test statistic varies normally. There is a great deal of statistical research on the question of when a test statistic varies approximately normally."}, {"text": "A test statistic is a statistic (a quantity derived from the sample) used in statistical hypothesis testing. A hypothesis test is typically specified in terms of a test statistic, considered as a numerical summary of a data-set that reduces the data to one value that can be used to perform the hypothesis test. 
In general, a test statistic is selected or defined in such a way as to quantify, within observed data, behaviours that would distinguish the null from the alternative hypothesis, where such an alternative is prescribed, or that would characterize the null hypothesis if there is no explicitly stated alternative hypothesis."}, {"text": "An important property of a test statistic is that its sampling distribution under the null hypothesis must be calculable, either exactly or approximately, which allows p-values to be calculated. A test statistic shares some of the same qualities of a descriptive statistic, and many statistics can be used as both test statistics and descriptive statistics. However, a test statistic is specifically intended for use in statistical testing, whereas the main quality of a descriptive statistic is that it is easily interpretable."}, {"text": "positive predictive value (PPV, aka precision) (TP/(TP+FP)). These are the proportion of the population with a given test result for which the test is correct."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. 
Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}]}, {"question": "What is statistical significance in AB testing", "positive_ctxs": [{"text": "In the context of AB testing experiments, statistical significance is how likely it is that the difference between your experiment's control version and test version isn't due to error or random chance. It's commonly used in business to observe how your experiments affect your business's conversion rates."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. 
Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}, {"text": "Rigidly requiring statistical significance as a criterion for publication, resulting in publication bias. Most of the criticism is indirect. Rather than being wrong, statistical hypothesis testing is misunderstood, overused and misused."}]}, {"question": "What is sparsity in document term matrix", "positive_ctxs": [{"text": "r text-mining natural-language. According the documentation of the removeSparseTerms function from the tm package, this is what sparsity entails: A term-document matrix where those terms from x are removed which have at least a sparse percentage of empty (i.e., terms occurring 0 times in a document) elements."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A\u2014a measure of its unique dimensions \u2264 min(m,n). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors."}, {"text": "In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A\u2014a measure of its unique dimensions \u2264 min(m,n). 
S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors."}, {"text": "In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A\u2014a measure of its unique dimensions \u2264 min(m,n). S is a computed r by r diagonal matrix of decreasing singular values, and D is a computed n by r matrix of document vectors."}, {"text": "When creating a data-set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each ij cell, then, is the number of times word j occurs in document i. As such, each row is a vector of term counts that represents the content of the document corresponding to that row."}, {"text": "A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix,"}, {"text": "A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix,"}, {"text": "A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. 
It computes the term and document vector spaces by approximating the single term-frequency matrix,"}]}, {"question": "What is ground truth box", "positive_ctxs": [{"text": "The ground-truth bounding boxes (i.e., the hand labeled bounding boxes from the testing set that specify where in the image our object is)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In slang, the coordinates indicate where we think George Washington's nose is located, and the ground truth is where it really is. In practice a smart phone or hand-held GPS unit is routinely able to estimate the ground truth within 6\u201310 meters. Specialized instruments can reduce GPS measurement error to under a centimeter."}, {"text": "Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm \u2013 inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts."}, {"text": "While the gold standard is a best effort to obtain the truth, ground truth is typically collected by direct observations.In machine learning and information retrieval, \"ground truth\" is the preferred term even when classifications may be imperfect; the gold standard is assumed to be the ground truth."}, {"text": "is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, then"}, {"text": "is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, then"}, {"text": "The ground truth being estimated by those coordinates is the tip of George Washington's nose on Mount Rushmore. The accuracy of the estimate is the maximum distance between the location coordinates and the ground truth. 
We could say in this case that the estimate accuracy is 10 meters, meaning that the point on earth represented by the location coordinates is thought to be within 10 meters of George's nose\u2014the ground truth."}, {"text": "In remote sensing, \"ground truth\" refers to information collected on location. Ground truth allows image data to be related to real features and materials on the ground. The collection of ground truth data enables calibration of remote-sensing data, and aids in the interpretation and analysis of what is being sensed."}]}, {"question": "What is sentiment analysis in natural language processing", "positive_ctxs": [{"text": "Sentiment analysis (also known as opinion mining or emotion AI) refers to the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed."}, {"text": "Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. 
For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed."}, {"text": "Trigrams are a special case of the n-gram, where n is 3. They are often used in natural language processing for performing statistical analysis of texts and in cryptography for control and use of ciphers and codes."}, {"text": "Grammatical dependency relations are obtained by deep parsing of the text. Hybrid approaches leverage both machine learning and elements from knowledge representation such as ontologies and semantic networks in order to detect semantics that are expressed in a subtle manner, e.g., through the analysis of concepts that do not explicitly convey relevant information, but which are implicitly linked to other concepts that do so.Open source software tools as well as range of free and paid sentiment analysis tools deploy machine learning, statistics, and natural language processing techniques to automate sentiment analysis on large collections of texts, including web pages, online news, internet discussion groups, online reviews, web blogs, and social media. Knowledge-based systems, on the other hand, make use of publicly available resources, to extract the semantic and affective information associated with natural language concepts."}, {"text": "An issue when using n-gram language models are out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used."}, {"text": "An issue when using n-gram language models are out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. 
By default, when a language model is estimated, the entire observed vocabulary is used."}]}, {"question": "What is the main purpose of random sampling", "positive_ctxs": [{"text": "Simply put, a random sample is a subset of individuals randomly selected by researchers to represent an entire group as a whole. The goal is to get a sample of people that is representative of the larger population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Importance sampling provides a very important tool to perform Monte-Carlo integration. The main result of importance sampling to this method is that the uniform sampling of"}, {"text": "The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. 
The Monte Carlo simulation is, in fact, random experimentation, in the case that the results of these experiments are not well known."}, {"text": "The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. The Monte Carlo simulation is, in fact, random experimentation, in the case that the results of these experiments are not well known."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "What is the shape of the chi square distribution", "positive_ctxs": [{"text": "Chi Square distributions are positively skewed, with the degree of skew decreasing with increasing degrees of freedom. As the degrees of freedom increases, the Chi Square distribution approaches a normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. 
In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}]}, {"question": "Is Softmax the same as logistic regression", "positive_ctxs": [{"text": "Softmax regression (or multinomial logistic regression) is a generalization of logistic regression to the case where we want to handle multiple classes. In logistic regression we assumed that the labels were binary: y(i)\u2208{0,1} . We used such a classifier to distinguish between two kinds of hand-written digits."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "The logistic loss is sometimes called cross-entropy loss. 
It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}). Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "The basic setup is the same as in logistic regression, the only difference being that the dependent variables are categorical rather than binary, i.e. there are K possible outcomes rather than just two. The following description is somewhat shortened; for more details, consult the logistic regression article."}, {"text": "The basic setup is the same as in logistic regression, the only difference being that the dependent variables are categorical rather than binary, i.e. there are K possible outcomes rather than just two. The following description is somewhat shortened; for more details, consult the logistic regression article."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors. Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. 
Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors. Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors. Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}]}, {"question": "How do you define a vector space", "positive_ctxs": [{"text": "Definition: A vector space is a set V on which two operations + and \u00b7 are defined, called vector addition and scalar multiplication. The operation + (vector addition) must satisfy the following conditions: Closure: If u and v are any vectors in V, then the sum u + v belongs to V."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Roughly, affine spaces are vector spaces whose origins are not specified. More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map"}, {"text": "A vector bundle is a family of vector spaces parametrized continuously by a topological space X. More precisely, a vector bundle over X is a topological space E equipped with a continuous map"}, {"text": "There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space into itself, given any basis of the vector space. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices, or the language of linear transformations.If V is finite-dimensional, the above equation is equivalent to"}, {"text": "A linear subspace of dimension 2 is a vector plane. A linear subspace that contains all elements but one of a basis of the ambient space is a vector hyperplane. In a vector space of finite dimension n, a vector hyperplane is thus a subspace of dimension n \u2013 1."}, {"text": ", viewed as a vector space over itself. Equipped by pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of"}]}, {"question": "What is the simple definition of statistics", "positive_ctxs": [{"text": "1 : a branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data. 2 : a collection of quantitative data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Misuses of statistics can result from problems at any step in the process. The statistical standards ideally imposed on the scientific report are much different than those imposed on the popular press and advertisers; however, cases exist of advertising disguised as science. The definition of the misuse of statistics is weak on the required completeness of statistical reporting."}, {"text": "The term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n-p) for p regressors or (n-p-1) if an intercept is used (see errors and residuals in statistics for more details)."}, {"text": "The term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n-p) for p regressors or (n-p-1) if an intercept is used (see errors and residuals in statistics for more details)."}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. 
This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}]}, {"question": "How does Multicollinearity affect the regression model", "positive_ctxs": [{"text": "Multicollinearity causes the following two basic types of problems: The coefficient estimates can swing wildly based on which other independent variables are in the model. Multicollinearity reduces the precision of the estimate coefficients, which weakens the statistical power of your regression model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. 
In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors."}, {"text": "Multicollinearity refers to a situation in which more than two explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if, for example as in the equation above, the correlation"}, {"text": "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."}, {"text": "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."}]}, {"question": "What is the difference between AZ score and standard deviation", "positive_ctxs": [{"text": "Key Takeaways. Standard deviation defines the line along which a particular data point lies. Z-score indicates how much a given value differs from the standard deviation. The Z-score, or standard score, is the number of standard deviations a given data point lies above or below mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "-th feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. 
The score is normalized by the standard deviation of these differences."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "The IQR, mean, and standard deviation of a population P can be used in a simple test of whether or not P is normally distributed, or Gaussian. If P is normally distributed, then the standard score of the first quartile, z1, is \u22120.67, and the standard score of the third quartile, z3, is +0.67. 
Given mean = X and standard deviation = \u03c3 for P, if P is normally distributed, the first quartile"}]}, {"question": "What is logit probit model", "positive_ctxs": [{"text": "The logit model uses something called the cumulative distribution function of the logistic distribution. The probit model uses something called the cumulative distribution function of the standard normal distribution to define f(\u2217). Both functions will take any number and rescale it to fall between 0 and 1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions \u2013 i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution."}, {"text": "Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions \u2013 i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution."}, {"text": "Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions \u2013 i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. 
In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution."}, {"text": "The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.)"}, {"text": "The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.)"}, {"text": "Another model that was developed to offset the disadvantages of the LPM is the probit model. The probit model uses the same approach to non-linearity as does the logit model; however, it uses the normal CDF instead of the logistic CDF."}, {"text": "Another model that was developed to offset the disadvantages of the LPM is the probit model. 
The probit model uses the same approach to non-linearity as does the logit model; however, it uses the normal CDF instead of the logistic CDF."}]}, {"question": "What is probability sampling technique", "positive_ctxs": [{"text": "A probability sampling method is any method of sampling that utilizes some form of random selection. In order to have a random selection method, you must set up some process or procedure that assures that the different units in your population have equal probabilities of being chosen."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. 
The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these \"important\" values are emphasized by sampling more frequently, then the estimator variance can be reduced."}, {"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "In statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. It is related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both."}]}, {"question": "What is mini batch stochastic gradient descent", "positive_ctxs": [{"text": "Mini-batch gradient descent is a variation of the gradient descent algorithm that splits the training dataset into small batches that are used to calculate model error and update model coefficients. Implementations may choose to sum the gradient over the mini-batch which further reduces the variance of the gradient."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. 
It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}]}, {"question": "What is character N grams", "positive_ctxs": [{"text": "Character N-grams (of at least 3 characters) that are common to words meaning \u201ctransport\u201d in the same texts sample in French, Spanish and Greek and their respective frequency."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "An a priori probability is a probability that is derived purely by deductive reasoning. One way of deriving a priori probabilities is the principle of indifference, which has the character of saying that, if there are N mutually exclusive and collectively exhaustive events and if they are equally likely, then the probability of a given event occurring is 1/N. Similarly the probability of one of a given collection of K events is K / N."}, {"text": "The DualShock 3 can be identified by its \"DualShock 3\" and \"Sixaxis\" markings. It also weighs 192 grams (6.8 oz), 40% more than its predecessor, the Sixaxis, which weighed only 137.1 grams (4.84 oz)."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Proposition: The Cartesian product of two countable sets A and B is countable. Proof: Observe that N \u00d7 N is countable as a consequence of the definition because the function f : N \u00d7 N \u2192 N given by f(m, n) = 2^m 3^n is injective. It then follows from the Basic Theorem and the Corollary that the Cartesian product of any two countable sets is countable. This follows because if A and B are countable there are surjections f : N \u2192 A and g : N \u2192 B."}, {"text": "given by G(n, m) = g_n(m) is a surjection. Since N \u00d7 N is countable, the Corollary implies that the union is countable. We use the axiom of countable choice in this proof to pick for each n in N a surjection g_n from the non-empty collection of surjections from N to A_n."}, {"text": "If we take another sample of 25 cups, we could easily expect to find mean values like 250.4 or 251.1 grams. A sample mean value of 280 grams however would be extremely rare if the mean content of the cups is in fact close to 250 grams.
There is a whole interval around the observed value 250.2 grams of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual."}]}, {"question": "How do you write a regression model", "positive_ctxs": [{"text": "The Linear Regression Equation: The equation has the form Y = a + bX, where Y is the dependent variable (that's the variable that goes on the Y axis), X is the independent variable (i.e. it is plotted on the X axis), b is the slope of the line and a is the y-intercept."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Example: On a 1-5 scale where 1 means disagree completely and 5 means agree completely, how much do you agree with the following statement: \"The Federal government should do more to help people facing foreclosure on their homes.\" A multinomial discrete-choice model can examine the responses to these questions (model G, model H, model I)."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger.
Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "How can a Kalman filter be used in computer vision", "positive_ctxs": [{"text": "A Kalman Filter is an algorithm that can predict future positions based on current position. It can also estimate current position better than what the sensor is telling us. It will be used to have better association."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear\u2013quadratic\u2013Gaussian control problem (LQG)."}, {"text": "The Kalman filter has numerous applications in technology. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships.
Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics."}, {"text": "Kalman filters also are one of the main topics in the field of robotic motion planning and control and can be used in trajectory optimization. The Kalman filter also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, use of the Kalman filter supports a realistic model for making estimates of the current state of the motor system and issuing updated commands.The algorithm works in a two-step process."}, {"text": "are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if probability distribution is Gaussian a third-order approximation is possible)."}, {"text": "Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The underlying model is a hidden Markov model where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Also, Kalman filter has been successfully used in multi-sensor fusion, and distributed sensor networks to develop distributed or consensus Kalman filter."}, {"text": "The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. 
Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at."}, {"text": "If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose. If the noise terms are non-Gaussian distributed, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature."}]}, {"question": "What is the scope of NLP", "positive_ctxs": [{"text": "Developers can make use of NLP to perform tasks like speech recognition, sentiment analysis, translation, auto-correct of grammar while typing, and automated answer generation. NLP is a challenging field since it deals with human language, which is extremely diverse and can be spoken in a lot of ways."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the fact that the NLP is easier to solve than the boundary-value problem. 
The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs."}, {"text": "The range is a simple function of the sample maximum and minimum and these are specific examples of order statistics. In particular, the range is a linear function of order statistics, which brings it into the scope of L-estimation."}, {"text": "ASL: HOUSE I QUIET ENTER. When the scope of the adverb is the entire clause, as in the case of time, it comes before the topic. This is the only thing which can appear before the topic in ASL: time\u2013topic\u2013comment."}, {"text": "As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols."}, {"text": "Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides."}, {"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}]}, {"question": "Is time a discrete variable", "positive_ctxs": [{"text": "A continuous variable is one which can take on a value between any other two values, such as: indoor temperature, time spent waiting, water consumed, color wavelength, and direction of travel. A discrete variable corresponds to a digital quantity, while a continuous variable corresponds to an analog quantity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e.
can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "In addition, for each of the N possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with M possible values, governed by a categorical distribution, there will be"}, {"text": "A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space)."}, {"text": "Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}."}, {"text": "Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. 
Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}."}, {"text": "Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}."}]}, {"question": "How do you create a dataset of an image", "positive_ctxs": [{"text": "Procedure: From the cluster management console, select Workload > Spark > Deep Learning. Select the Datasets tab. Click New. Create a dataset from Images for Object Classification. Provide a dataset name. Specify a Spark instance group. Specify image storage format, either LMDB for Caffe or TFRecords for TensorFlow."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Some academics and legal technology startups are attempting to create algorithmic models to predict case outcomes.
Part of this overall effort involves improved case assessment for litigation funding. In order to better evaluate the quality of case outcome prediction systems, a proposal has been made to create a standardised dataset that would allow comparisons between systems."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}]}, {"question": "What are the different types of regression", "positive_ctxs": [{"text": "The different types of regression in machine learning techniques are explained below in detail: Linear Regression. Linear regression is one of the most basic types of regression in machine learning. Logistic Regression. Ridge Regression. Lasso Regression. Polynomial Regression. Bayesian Linear Regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes.
This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}, {"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. 
The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}]}, {"question": "How is K means clustering used in prediction", "positive_ctxs": [{"text": "How to Use K-means Cluster Algorithms in Predictive AnalysisPick k random items from the dataset and label them as cluster representatives.Associate each remaining item in the dataset with the nearest cluster representative, using a Euclidean distance calculated by a similarity function.Recalculate the new clusters' representatives.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. 
Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}]}, {"question": "How do I interpret p value in logistic regression", "positive_ctxs": [{"text": "The p-value for each term tests the null hypothesis that the coefficient is equal to zero (no effect). A low p-value (< 0.05) indicates that you can reject the null hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g."}, {"text": "In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g."}, {"text": "Logistic regression will always be heteroscedastic \u2013 the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. 
Therefore, it is inappropriate to think of R\u00b2 as a proportionate reduction in error in a universal sense in logistic regression."}, {"text": "Logistic regression will always be heteroscedastic \u2013 the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R\u00b2 as a proportionate reduction in error in a universal sense in logistic regression."}, {"text": "Logistic regression will always be heteroscedastic \u2013 the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R\u00b2 as a proportionate reduction in error in a universal sense in logistic regression."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}]}, {"question": "Why the Monty Hall problem is wrong", "positive_ctxs": [{"text": "The Monty Hall problem has confused people for decades. In the game show, Let's Make a Deal, Monty Hall asks you to guess which closed door a prize is behind. The answer is so puzzling that people often refuse to accept it! 
The problem occurs because our statistical assumptions are incorrect."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The problem is actually an extrapolation from the game show. Monty Hall did open a wrong door to build excitement, but offered a known lesser prize \u2013 such as $100 cash \u2013 rather than a choice to switch doors. As Monty Hall wrote to Selvin:"}, {"text": "Probability and the Monty Hall problem\", BBC News Magazine, 11 September 2013 (video). Mathematician Marcus du Sautoy explains the Monty Hall paradox."}, {"text": "Steve Selvin posed the Monty Hall problem in a pair of letters to the American Statistician in 1975. The first letter presented the problem in a version close to its presentation in Parade 15 years later. The second appears to be the first use of the term \"Monty Hall problem\"."}, {"text": "Paul Erd\u0151s, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result. The problem is a paradox of the veridical type, because the correct choice (that one should switch doors) is so counterintuitive it can seem absurd, but is nevertheless demonstrably true. The Monty Hall problem is mathematically closely related to the earlier Three Prisoners problem and to the much older Bertrand's box paradox."}, {"text": "Going back to Nalebuff, the Monty Hall problem is also much studied in the literature on game theory and decision theory, and also some popular solutions correspond to this point of view. Vos Savant asks for a decision, not a chance. And the chance aspects of how the car is hidden and how an unchosen door is opened are unknown."}, {"text": "The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let's Make a Deal and named after its original host, Monty Hall.
The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from a reader's letter quoted in Marilyn vos Savant's \"Ask Marilyn\" column in Parade magazine in 1990:"}, {"text": "The warden obliges, (secretly) flipping a coin to decide which name to provide if the prisoner who is asking is the one being pardoned. The question is whether knowing the warden's answer changes the prisoner's chances of being pardoned. This problem is equivalent to the Monty Hall problem; the prisoner asking the question still has a 1/3 chance of being pardoned but his unnamed colleague has a 2/3 chance."}]}, {"question": "What is the difference between normal and lognormal distribution", "positive_ctxs": [{"text": "A major difference is in its shape: the normal distribution is symmetrical, whereas the lognormal distribution is not. Because the values in a lognormal distribution are positive, they create a right-skewed curve. A further distinction is that the values used to derive a lognormal distribution are normally distributed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "the normal distribution, the lognormal distribution, the logistic distribution, the loglogistic distribution, the exponential distribution, the Fr\u00e9chet distribution, the Gumbel distribution, the Pareto distribution, the Weibull distribution and otheroften shows that a number of distributions fit the data well and do not yield significantly different results, while the differences between them may be small compared to the width of the confidence interval. 
This illustrates that it may be difficult to determine which distribution gives better results."}, {"text": "Because the square of a standard normal distribution is the chi-square distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-square distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed)."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? 
It has to do with uncertainty at runtime of a plan."}, {"text": "The main difference between the two approaches is that the GLM strictly assumes that the residuals will follow a conditionally normal distribution, while the GLiM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. Of note, the GLM is a special case of the GLiM in which the distribution of the residuals follow a conditionally normal distribution."}, {"text": "The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other. The normal distribution is a subclass of the elliptical distributions."}]}, {"question": "How do you prove two variables are uncorrelated", "positive_ctxs": [{"text": "If two random variables X and Y are independent, then they are uncorrelated. Proof. Uncorrelated means that their correlation is 0, or, equivalently, that the covariance between them is 0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "In general, random variables may be uncorrelated but statistically dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent."}, {"text": "But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent."}, {"text": "There are cases in which uncorrelatedness does imply independence. One of these cases is the one in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution). Further, two jointly normally distributed random variables are independent if they are uncorrelated, although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not joint normal (see Normally distributed and uncorrelated does not imply independent)."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}]}, {"question": "Why dimensionality reduction is important step in machine learning", "positive_ctxs": [{"text": "Advantages of Dimensionality Reduction It helps in data compression, and hence reduced storage space. It reduces computation time. It also helps remove redundant features, if any."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Below is a summary of some of the important algorithms from the history of manifold learning and nonlinear dimensionality reduction (NLDR). Many of these non-linear dimensionality reduction methods are related to the linear methods listed below. Non-linear methods can be broadly classified into two groups: those that provide a mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa), and those that just give a visualisation."}, {"text": "Data preprocessing is an important step in the data mining process. The phrase \"garbage in, garbage out\" is particularly applicable to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values (e.g., Income: \u2212100), impossible data combinations (e.g., Sex: Male, Pregnant: Yes), and missing values, etc."}, {"text": "Dimensionality reduction loses information, in general. 
PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models."}, {"text": "Undercomplete dictionaries represent the setup in which the actual input data lies in a lower-dimensional space. This case is strongly related to dimensionality reduction and techniques like principal component analysis which require atoms"}]}, {"question": "What is probability theory used for", "positive_ctxs": [{"text": "Probability theory is the mathematical study of phenomena characterized by randomness or uncertainty. More precisely, probability is used for modelling situations when the result of an experiment, realized under the same circumstances, produces different results (typically throwing a dice or a coin)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify. Thereby, the theory aims to represent the available knowledge more accurately. 
Imprecision is useful for dealing with expert elicitation, because:"}, {"text": "E1) A doctor is seeking an anti-depressant for a newly diagnosed patient. Suppose that, of the available anti-depressant drugs, the probability that any particular drug will be effective for a particular patient is p = 0.6. What is the probability that the first drug found to be effective for this patient is the first drug tried, the second drug tried, and so on?"}, {"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}]}, {"question": "What is discriminant analysis used for", "positive_ctxs": [{"text": "Discriminant analysis is a versatile statistical method often used by market researchers to classify observations into two or more groups or categories. In other words, discriminant analysis is used to assign objects to one group among a number of known groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. 
The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis."}]}, {"question": "Is the level of significance the same as the P value", "positive_ctxs": [{"text": "The level of statistical significance is often expressed as a p-value between 0 and 1. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis. A p-value higher than 0.05 (> 0.05) is not statistically significant and indicates strong evidence for the null hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (\u03b1). 
If F \u2265 FCritical, the null hypothesis is rejected."}, {"text": "Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error."}]}, {"question": "What is sparse coding in neural network", "positive_ctxs": [{"text": "Sparse coding is the representation of items by the strong activation of a relatively small set of neurons. For each stimulus, this is a different subset of all available neurons."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Theoretical work on SDM by Kanerva has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision, audition, touch, and olfaction. However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been lacking until recently."}, {"text": "is renormalized to fit the constraints and the new sparse coding is obtained again. The process is repeated until convergence (or until a sufficiently small residue)."}, {"text": "Research has shown that unary coding is used in the neural circuits responsible for birdsong production. 
The use of unary in biological networks is presumably due to the inherent simplicity of the coding. Another contributing factor could be that unary coding provides a certain degree of error correction."}, {"text": "is known as sparse approximation (or sometimes just sparse coding problem). There has been developed a number of algorithms to solve it (such as matching pursuit and LASSO) which are incorporated into the algorithms described below."}, {"text": "In applied mathematics, K-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. K-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary, and updating the atoms in the dictionary to better fit the data. K-SVD can be found widely in use in applications such as image processing, audio processing, biology, and document analysis."}]}, {"question": "What does Null Hypothesis significance testing NHST mean", "positive_ctxs": [{"text": "NHST is difficult to describe in one sentence, particularly here."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hypothesis testing can mean any mixture of two formulations that both changed with time. 
Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion."}]}, {"question": "What is augmentation in deep learning", "positive_ctxs": [{"text": "The performance of deep learning neural networks often improves with the amount of data available. Data augmentation is a technique to artificially create new training data from existing training data. This means, variations of the training set images that are likely to be seen by the model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Data augmentation in data analysis are techniques used to increase the amount of data by adding slightly modified copies of already existing data or newly created synthetic data from existing data. It acts as a regularizer and helps reduce overfitting when training a machine learning model. It is closely related to oversampling in data analysis."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}]}, {"question": "Can you use categorical variables in clustering", "positive_ctxs": [{"text": "If your data contains both numeric and categorical variables, the best way to carry out clustering on the dataset is to create principal components of the dataset and use the principal component scores as input into the clustering."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}, {"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. 
Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}, {"text": "On the other hand, with observational research you can not control for interfering variables (low internal validity) but you can measure in the natural (ecological) environment, at the place where behavior normally occurs. However, in doing so, you sacrifice internal validity."}, {"text": "For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE). Because this process creates multiple new variables, it is prone to creating a big p problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy. Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application. In practical usage this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables."}, {"text": "In machine learning, one-hot encoding is a frequently used method to deal with categorical data. 
Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing part."}, {"text": "For ease in statistical processing, categorical variables may be assigned numeric indices, e.g. 1 through K for a K-way categorical variable (i.e. a variable that can express exactly K possible values)."}]}, {"question": "Can a sample mean be zero", "positive_ctxs": [{"text": "If all of the values in the sample are identical, the sample standard deviation will be zero. When discussing the sample mean, we found that the sample mean for diastolic blood pressure was 71.3."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding, for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "Firstly, if the omniscient mean is unknown (and is computed as the sample mean), then the sample variance is a biased estimator: it underestimates the variance by a factor of (n \u2212 1) / n; correcting by this factor (dividing by n \u2212 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. 
For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance."}]}, {"question": "Why accuracy is not used as a preferred method for real world IR system evaluation", "positive_ctxs": [{"text": "There is a good reason why accuracy is not an appropriate measure for information retrieval problems. In almost all circumstances, the data is extremely skewed: normally over 99.9% of the documents are in the nonrelevant category."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations. In real world problems such as structural optimization problems, a single function evaluation may require several hours to several days of complete simulation."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). \"Why Simplicity is no Problem for Bayesians\"."}, {"text": "In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality."}, {"text": "FTIR spectrometers are mostly used for measurements in the mid and near IR regions. 
For the mid-IR region, 2\u221225 \u03bcm (5,000\u2013400 cm\u22121), the most common source is a silicon carbide element heated to about 1,200 K (Globar). The output is similar to a blackbody."}, {"text": "Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that fulfills three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive (i.e."}, {"text": "The F-score is also used in machine learning. However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.The F-score has been widely used in the natural language processing literature, such as in the evaluation of named entity recognition and word segmentation."}]}, {"question": "What is padding in deep learning", "positive_ctxs": [{"text": "Padding is a term relevant to convolutional neural networks as it refers to the amount of pixels added to an image when it is being processed by the kernel of a CNN. For example, if the padding in a CNN is set to zero, then every pixel value that is added will be of value zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}]}, {"question": "What is resampling in signal processing", "positive_ctxs": [{"text": "resample Function One resampling application is the conversion of digitized audio signals from one sample rate to another, such as from 48 kHz (the digital audio tape standard) to 44.1 kHz (the compact disc standard). resample applies a lowpass filter to the input sequence to prevent aliasing during resampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In digital signal processing, downsampling, compression, and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both downsampling and decimation can be synonymous with compression, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction. 
When the process is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph)."}, {"text": "Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals."}, {"text": "Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Transition to track is normally manual for non-Newtonian signal sources, but additional signal processing can be used to automate the process. 
Doppler velocity feedback must be disabled in the vicinity of the signal source to develop track data."}, {"text": "This signal processing strategy is used in pulse-Doppler radar and multi-mode radar, which can then be pointed into regions containing a large number of slow-moving reflectors without overwhelming computer software and operators. Other signal processing strategies, like moving target indication, are more appropriate for benign clear blue sky environments."}, {"text": "pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993, that Gordon et al., published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system."}]}, {"question": "What is input vector", "positive_ctxs": [{"text": "In computer science and engineering, a test vector is a set of inputs provided to a system in order to test that system. In software development, test vectors are a methodology of software testing and software verification and validation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "If the vigilance parameter is overcome (i.e. the input vector is within the normal range seen on previous input vectors), then training commences:"}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Attention: The input to the decoder is a single vector which stores the entire context. Attention allows the decoder to look at the input sequence selectively."}, {"text": "Consider a binary classification problem with a dataset (x1, y1), ..., (xn, yn), where xi is an input vector and yi \u2208 {-1, +1} is a binary label corresponding to it. A soft-margin support vector machine is trained by solving a quadratic programming problem, which is expressed in the dual form as follows:"}, {"text": "In practice, the training dataset often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training dataset and produces a result, which is then compared with the target, for each input vector in the training dataset. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted."}, {"text": "In practice, the training dataset often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the target (or label). The current model is run with the training dataset and produces a result, which is then compared with the target, for each input vector in the training dataset. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted."}]}, {"question": "Which algorithm is right for machine learning", "positive_ctxs": [{"text": "An easy guide to choose the right Machine Learning algorithmSize of the training data. It is usually recommended to gather a good amount of data to get reliable predictions. Accuracy and/or Interpretability of the output. Speed or Training time. 
Linearity. Number of features."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Stability, also known as algorithmic stability, is a notion in computational learning theory of how a machine learning algorithm is perturbed by small changes to its inputs. A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained to recognize handwritten letters of the alphabet, using 1000 examples of handwritten letters and their labels (\"A\" to \"Z\") as a training set."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How do you find the probability of multiple events", "positive_ctxs": [{"text": "Just multiply the probability of the first event by the second. For example, if the probability of event A is 2/9 and the probability of event B is 3/9 then the probability of both events happening at the same time is (2/9)*(3/9) = 6/81 = 2/27."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Probability is a way of assigning every \"event\" a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) be assigned a value of one. 
To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events that contain no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that any of these events occurs is given by the sum of the probabilities of the events.The probability that any one of the events {1,6}, {3}, or {2,4} will occur is 5/6. This is the same as saying that the probability of event {1,2,3,4,6} is 5/6."}, {"text": "The probability of an event is different, but related, and can be calculated from the odds, and vice versa. The probability of rolling a 5 or 6 is the fraction of the number of events over total events or 2/(2+4), which is 1/3, 0.33 or 33%.When gambling, odds are often the ratio of winnings to the stake and you also get your wager returned. So wagering 1 at 1:5 pays out 6 (5 + 1)."}, {"text": "Nature has established patterns originating in the return of events but only for the most part. New illnesses flood the human race, so that no matter how many experiments you have done on corpses, you have not thereby imposed a limit on the nature of events so that in the future they could not vary."}, {"text": "The probability measure function must satisfy two simple requirements: First, the probability of a countable union of mutually exclusive events must be equal to the countable sum of the probabilities of each of these events. For example, the probability of the union of the mutually exclusive events"}]}, {"question": "How do you make the dots on a scatter plot bigger", "positive_ctxs": [{"text": "To format the size of data points in a scatter plot graph, right click any of the data points and select 'format data series' then select marker options and customize for larger or smaller data points."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For a set of data variables (dimensions) X1, X2, ... 
, Xk, the scatter plot matrix shows all the pairwise scatter plots of the variables on a single view with multiple scatterplots in a matrix format. For k variables, the scatterplot matrix will contain k rows and k columns. A plot located on the intersection of i-th row and j-th column is a plot of variables Xi versus Xj."}, {"text": "For a set of data variables (dimensions) X1, X2, ... , Xk, the scatter plot matrix shows all the pairwise scatter plots of the variables on a single view with multiple scatterplots in a matrix format. For k variables, the scatterplot matrix will contain k rows and k columns. A plot located on the intersection of i-th row and j-th column is a plot of variables Xi versus Xj."}, {"text": "A scatter plot (also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram) is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded (color/shape/size), one additional variable can be displayed."}, {"text": "A scatter plot (also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram) is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded (color/shape/size), one additional variable can be displayed."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "A person with a lung capacity of 400 cl who held their breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in the Cartesian coordinates. 
The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set, and will help to determine what kind of relationship there might be between the two variables."}, {"text": "A person with a lung capacity of 400 cl who held their breath for 21.7 seconds would be represented by a single dot on the scatter plot at the point (400, 21.7) in the Cartesian coordinates. The scatter plot of all the people in the study would enable the researcher to obtain a visual comparison of the two variables in the data set, and will help to determine what kind of relationship there might be between the two variables."}]}, {"question": "What is sparse data in machine learning", "positive_ctxs": [{"text": "A common problem in machine learning is sparse data, which alters the performance of machine learning algorithms and their ability to calculate accurate predictions. Data is considered sparse when certain expected values in a dataset are missing, which is a common phenomenon in general large scaled data analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices, as they are common in the machine learning field. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. 
This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. 
This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}]}, {"question": "What is the difference between time series and regression", "positive_ctxs": [{"text": "Regression: This is a tool used to evaluate the relationship of a dependent variable in relation to multiple independent variables. A regression will analyze the mean of the dependent variable in relation to changes in the independent variables. Time Series: A time series measures data over a specific period of time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the \"standard\". For example, when measuring the average difference between two time series"}, {"text": "In some disciplines, the RMSD is used to compare differences between two things that may vary, neither of which is accepted as the \"standard\". For example, when measuring the average difference between two time series"}, {"text": "One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. 
Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trend and seasonality."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "This is often done by using a related series known for all relevant dates. 
Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression).The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}]}, {"question": "What is the statistical problem solving process", "positive_ctxs": [{"text": "Consider statistics as a problem-solving process and examine its four components: asking questions, collecting appropriate data, analyzing the data, and interpreting the results. This session investigates the nature of data and its potential sources of variation. Variables, bias, and random sampling are introduced."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In Multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized."}, {"text": "In Multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized."}, {"text": "In Multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized."}, {"text": "The minimization problem above is not convex because of the \u21130-\"norm\" and solving this problem is NP-hard. 
In some cases L1-norm is known to ensure sparsity and so the above becomes a convex optimization problem with respect to each of the variables"}, {"text": "The minimization problem above is not convex because of the \u21130-\"norm\" and solving this problem is NP-hard. In some cases L1-norm is known to ensure sparsity and so the above becomes a convex optimization problem with respect to each of the variables"}, {"text": "Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick."}]}, {"question": "What is Eigen analysis", "positive_ctxs": [{"text": "Eigenanalysis is a mathematical operation on a square, symmetric matrix. A square matrix has the same number of rows as columns. A symmetric matrix is the same if you switch rows and columns. Distance and similarity matrices are nearly always square and symmetric."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "It is common to make decisions under uncertainty. What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet \"guarantee\" acceptable performance?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "What does Homoscedasticity mean in regression", "positive_ctxs": [{"text": "Simply put, homoscedasticity means \u201chaving the same scatter.\u201d For it to exist in a set of data, the points must be about the same distance from the line, as shown in the picture above. The opposite is heteroscedasticity (\u201cdifferent scatter\u201d), where points are at widely varying distances from the regression line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "But what does \"twice as likely\" mean in terms of a probability? 
It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.)."}, {"text": "But what does \"twice as likely\" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.)."}, {"text": "At the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking the sum of the squares of the regression coefficients to be less than a fixed value in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable."}, {"text": "A mean does not just \"smooth\" the data. A mean is a form of low-pass filter. The effects of the particular filter used should be understood in order to make an appropriate choice."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "How do you know which is the explanatory variable", "positive_ctxs": [{"text": "If you have both a response variable and an explanatory variable, the explanatory variable is always plotted on the x-axis (the horizontal axis). The response variable is always plotted on the y-axis (the vertical axis)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero\u2014that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. 
This hypothesis is tested by computing the coefficient's t-statistic, as the ratio of the coefficient estimate to its standard error."}, {"text": "Second, for each explanatory variable of interest, one wants to know whether its estimated coefficient differs significantly from zero\u2014that is, whether this particular explanatory variable in fact has explanatory power in predicting the response variable. Here the null hypothesis is that the true coefficient is zero. This hypothesis is tested by computing the coefficient's t-statistic, as the ratio of the coefficient estimate to its standard error."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test."}, {"text": "Two hypothesis tests are particularly widely used. First, one wants to know if the estimated regression equation is any better than simply predicting that all values of the response variable equal its sample mean (if not, it is said to have no explanatory power). The null hypothesis of no explanatory value of the estimated regression is tested using an F-test."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. 
The problem here is that you have divided by a relative value rather than an absolute."}]}, {"question": "How do you do multi class classification", "positive_ctxs": [{"text": "Approach \u2013Load dataset from source.Split the dataset into \u201ctraining\u201d and \u201ctest\u201d data.Train Decision tree, SVM, and KNN classifiers on the training data.Use the above classifiers to predict labels for the test data.Measure accuracy and visualise classification."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "How can I choose among classification algorithms to work with", "positive_ctxs": [{"text": "Here are some important considerations while choosing an algorithm.Size of the training data. It is usually recommended to gather a good amount of data to get reliable predictions. Accuracy and/or Interpretability of the output. Speed or Training time. Linearity. Number of features."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "We wish to select, from among the candidate models, the model that minimizes the information loss. We cannot choose with certainty, but we can minimize the estimated information loss."}, {"text": "I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs."}, {"text": "ASL makes heavy use of time-sequenced ordering, meaning that events are signed in the order in which they occur. For example, for I was late to class last night because my boss handed me a huge stack of work after lunch yesterday, one would sign 'yesterday lunch finish, boss give-me work big-stack, night class late-me'. 
In stories, however, ordering is malleable, since one can choose to sequence the events either in the order in which they occurred or in the order in which one found out about them."}, {"text": "Attorney and business ethics expert Lauren Bloom, author of The Art of the Apology, mentions the \"if apology\" as a favorite of politicians, with lines such as \"I apologize if I offended anyone\". Comedian Harry Shearer has coined the term Ifpology for its frequent appearances on \"The Apologies of the Week\" segment of Le Show.One of the first references was in The New York Times by Richard Mooney in his 1992 editorial notebook \"If This Sounds Slippery ... How to Apologize and Admit Nothing\". This was mainly in regard to Senator Bob Packwood: \"Only in the event that someone should choose to take offense, why then he's sorry\"."}, {"text": ", but can only choose the estimator among natural estimators. A natural estimator assigns equal probability to the symbols which appear the same number of time in the sample. The regret of the oracle is"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Presence of interactions and non-linearities. If each of the features makes an independent contribution to the output, then algorithms based on linear functions (e.g., linear regression, logistic regression, Support Vector Machines, naive Bayes) and distance functions (e.g., nearest neighbor methods, support vector machines with Gaussian kernels) generally perform well. 
However, if there are complex interactions among features, then algorithms such as decision trees and neural networks work better, because they are specifically designed to discover these interactions."}]}, {"question": "Are genetic algorithms artificial intelligence", "positive_ctxs": [{"text": "Genetic algorithms are stochastic search algorithms which act on a population of possible solutions. Genetic algorithms are used in artificial intelligence like other search algorithms are used in artificial intelligence \u2014 to search a space of potential solutions to find one which solves the problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Are there limits to how intelligent machines\u2014or human-machine hybrids\u2014can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent."}, {"text": "Are there limits to how intelligent machines\u2014or human-machine hybrids\u2014can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent."}, {"text": "Are there limits to how intelligent machines\u2014or human-machine hybrids\u2014can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent."}, {"text": "These absorption models are represented by Feynman-Kac models. 
The long time behavior of these processes conditioned on non-extinction can be expressed in an equivalent way by quasi-invariant measures, Yaglom limits, or invariant measures of nonlinear normalized Feynman-Kac flows. In computer sciences, and more particularly in artificial intelligence, these mean field type genetic algorithms are used as random search heuristics that mimic the process of evolution to generate useful solutions to complex optimization problems. These stochastic search algorithms belong to the class of Evolutionary models."}, {"text": "For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g. ant colony optimization, particle swarm optimization) and methods based on integer linear programming."}, {"text": "Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}]}, {"question": "How is Taguchi quality loss function calculated", "positive_ctxs": [{"text": "Taguchi loss function formulaL is the loss function.y is the value of the characteristic you are measuring (e.g. length of product)m is the value you are aiming for (in our example, perfect length for the product)k is a proportionality constant (i.e. just a number)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Through his concept of the quality loss function, Taguchi explained that from the customer's point of view this drop of quality is not sudden. The customer experiences a loss of quality the moment product specification deviates from the 'target value'. This 'loss' is depicted by a quality loss function and it follows a parabolic curve mathematically given by L = k(y\u2013m)2, where m is the theoretical 'target value' or 'mean value' and y is the actual size of the product, k is a constant and L is the loss."}, {"text": "The Taguchi loss function is graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement), it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. Instead 'loss' in value progressively increases as variation increases from the intended condition."}, {"text": "This equation is true for a single product; if 'loss' is to be calculated for multiple products the loss function is given by L = k[S2 + ("}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm). 
This familiar loss function is used in Ordinary Least Squares regression."}, {"text": "How to define the \"simplicity\" of the manifold is problem-dependent, however, it is commonly measured by the intrinsic dimensionality and/or the smoothness of the manifold. Usually, the principal manifold is defined as a solution to an optimization problem. The objective function includes a quality of data approximation and some penalty terms for the bending of the manifold."}, {"text": "There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is monotonic and continuously differentiable, the classifier is always driven toward purer solutions. Zhang (2004) provides a loss function based on least squares, a modified Huber loss function:"}, {"text": "One commonly used algorithm to find the set of weights that minimizes the error is gradient descent. By backpropagation, the steepest descent direction is calculated of the loss function versus the present synaptic weights. Then, the weights can be modified along the steepest descent direction, and the error is minimized in an efficient way."}]}, {"question": "Why do we need to perform exploratory data analysis", "positive_ctxs": [{"text": "Exploratory Data Analysis is one of the important steps in the data analysis process. Exploratory Data Analysis is a crucial step before you jump to machine learning or modeling of your data. It provides the context needed to develop an appropriate model \u2013 and interpret the results correctly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function:"}, {"text": "Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset. 
An exploratory analysis is used to find ideas for a theory, but not to test that theory as well. When a model is found exploratory in a dataset, then following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same type 1 error that resulted in the exploratory model in the first place."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "In the main analysis phase either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected. In an exploratory analysis no clear hypothesis is stated before analysing the data, and the data is searched for models that describe the data well."}, {"text": "\u201c\u2026prose to describe an algorithm, ignoring the implementation details. 
At this level, we do not need to mention how the machine manages its tape or head.\""}]}, {"question": "Is squared loss convex", "positive_ctxs": [{"text": "Fortunately, hinge loss, logistic loss and square loss are all convex functions. Convexity ensures global minimum and it's computationally appealing."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. It combines the best properties of L2 squared loss and L1 absolute loss by being strongly convex when close to the target/minimum and less steep for extreme values. This steepness can be controlled by the"}, {"text": "which states that if a statistic that is unbiased, complete and sufficient for some parameter \u03b8, then it is the best mean-unbiased estimator for \u03b8. In other words, this statistic has a smaller expected loss for any convex loss function; in many practical applications with the squared loss-function, it has a smaller mean squared error among any estimators with the same expected value."}, {"text": "Boosting can be seen as minimization of a convex loss function over a convex set of functions. Specifically, the loss being minimized by AdaBoost is the exponential loss"}, {"text": ", provides such a convex relaxation. In fact, the hinge loss is the tightest convex upper bound to the 0\u20131 misclassification loss function, and with infinite data returns the Bayes-optimal solution:"}, {"text": "As the loss is convex the optimum solution lies at gradient zero. The gradient of the loss function is (using Denominator layout convention):"}, {"text": "As the loss is convex the optimum solution lies at gradient zero. The gradient of the loss function is (using Denominator layout convention):"}, {"text": "As the loss is convex the optimum solution lies at gradient zero. 
The gradient of the loss function is (using Denominator layout convention):"}]}, {"question": "What is an activation value *", "positive_ctxs": [{"text": "The input nodes take in information, in the form which can be numerically expressed. The information is presented as activation values, where each node is given a number, the higher the number, the greater the activation. The output nodes then reflect the input in a meaningful way to the outside world."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "and :) is parsed as if parenthesized. Also, note that the immediate, unparenthesized result of a C cast expression cannot be the operand of sizeof. Therefore, sizeof (int) * x is interpreted as (sizeof(int)) * x and not sizeof ((int) * x)."}, {"text": "A series of modified data is obtained by multiplying the trend-cycle, seasonal component, and adjusted irregular component together.Repeat whole process two more times with modified data. On final iteration, the 3 * 5 MA of Steps 11 and 12 is replaced by either a 3 * 3, 3 * 5, or 3 * 9 moving average, depending on the variability in the data."}, {"text": "the (1.3452 * car + 0.2828 * truck) component could be interpreted as \"vehicle\". However, it is very likely that cases close to"}, {"text": "the (1.3452 * car + 0.2828 * truck) component could be interpreted as \"vehicle\". However, it is very likely that cases close to"}, {"text": "the (1.3452 * car + 0.2828 * truck) component could be interpreted as \"vehicle\". However, it is very likely that cases close to"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "How does activation spread through a semantic network", "positive_ctxs": [{"text": "Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and terms contained in those documents."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property."}, {"text": "A semantic network, or frame network is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields."}, {"text": "When activation functions have this property, the neural network will learn efficiently when its weights are initialized with small random values. When the activation function does not approximate identity near the origin, special care must be used when initializing the weights. In the table below, activation functions where"}, {"text": "Symmetric connections enables a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights guarantees convergence to a stable activation pattern.Hopfield networks are used as CAMs and are guaranteed to settle to a some pattern."}, {"text": "Symmetric connections enables a global energy formulation. During inference the network updates each state using the standard activation step function. 
Symmetric weights guarantee convergence to a stable activation pattern. Hopfield networks are used as CAMs and are guaranteed to settle to some pattern."}, {"text": "Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard activation step function. Symmetric weights guarantee convergence to a stable activation pattern. Hopfield networks are used as CAMs and are guaranteed to settle to some pattern."}, {"text": "Contagion maps use multiple contagions on a network to map the nodes as a point cloud. In the case of the Global cascades model the speed of the spread can be adjusted with the threshold parameter"}]}, {"question": "What are some applications of AI in real life", "positive_ctxs": [{"text": "Examples of Artificial Intelligence: Work & School1 \u2013 Google's AI-Powered Predictions. 2 \u2013 Ridesharing Apps Like Uber and Lyft. 3 \u2014 Commercial Flights Use an AI Autopilot.1 \u2013 Spam Filters.2 \u2013 Smart Email Categorization.1 \u2013Plagiarism Checkers. 2 \u2013Robo-readers. 1 \u2013 Mobile Check Deposits.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery, video games, and toys. However, many AI applications are not perceived as AI: \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.\" \"Many thousands of AI applications are deeply embedded in the infrastructure of every industry.\""}, {"text": "\"According to Stottler Henke, \"The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. 
Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques."}, {"text": "Engaged: real life tasks are reflected in the activities conducted for learning.Active learning requires appropriate learning environments through the implementation of correct strategy. Characteristics of learning environment are:"}, {"text": "The United States and other nations are developing AI applications for a range of military functions. The main military applications of Artificial Intelligence and Machine Learning are to enhance C2, Communications, Sensors, Integration and Interoperability. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles."}, {"text": "The United States and other nations are developing AI applications for a range of military functions. The main military applications of Artificial Intelligence and Machine Learning are to enhance C2, Communications, Sensors, Integration and Interoperability. 
AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles."}, {"text": "The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, agree in principle that \"There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities\" and \"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.\" AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of \"those inane Terminator pictures\" to illustrate AI safety concerns: \"It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."}, {"text": "Leading AI researcher Rodney Brooks writes, \"I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI and the enormity and complexity of building sentient volitional intelligence. \"Lethal autonomous weapons are of concern."}]}, {"question": "How do I find the value of Tensor", "positive_ctxs": [{"text": "There are two main ways to access subsets of the elements in a tensor, either of which should work for your example.Use the indexing operator (based on tf. slice() ) to extract a contiguous slice from the tensor. input = tf. Use the tf. gather() op to select a non-contiguous slice from the tensor. 
input = tf."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? 
How do we determine the critical value c?"}]}, {"question": "What does a sampling distribution of sample means represent", "positive_ctxs": [{"text": "The sampling distribution of the sample mean can be thought of as \"For a sample of size n, the sample mean will behave according to this distribution.\" Any random draw from that sampling distribution would be interpreted as the mean of a sample of n observations from the original population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. 
This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and population parameter is considered the sampling error."}, {"text": "In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations (data points), were separately used in order to compute one value of a statistic (such as, for example, the sample mean or sample variance) for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically."}, {"text": "An unbiased random selection of individuals is important so that if many samples were drawn, the average sample would accurately represent the population. However, this does not guarantee that a particular sample is a perfect representation of the population. Simple random sampling merely allows one to draw externally valid conclusions about the entire population based on the sample."}, {"text": "An unbiased random selection of individuals is important so that if many samples were drawn, the average sample would accurately represent the population. However, this does not guarantee that a particular sample is a perfect representation of the population. 
Simple random sampling merely allows one to draw externally valid conclusions about the entire population based on the sample."}]}, {"question": "What does the determinant of the correlation matrix represent", "positive_ctxs": [{"text": "The determinant is related to the volume of the space occupied by the swarm of data points represented by standard scores on the measures involved. When the measures are correlated, the space occupied becomes an ellipsoid whose volume is less than 1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense, since this determinant is independent of the choice of the basis."}, {"text": "The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is:"}, {"text": "The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution with four parameters is:"}, {"text": "s. The second term on the right will be a diagonal matrix with terms less than unity. The first term on the right is the \"reduced correlation matrix\" and will be equal to the correlation matrix except for its diagonal values which will be less than unity. 
These diagonal elements of the reduced correlation matrix are called \"communalities\" (which represent the fraction of the variance in the observed variable that is accounted for by the factors):"}, {"text": "Different bases of translation vectors generate the same lattice if and only if one is transformed into the other by a matrix of integer coefficients of which the absolute value of the determinant is 1. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the n-dimensional parallelepiped the set subtends (also called the covolume of the lattice). This parallelepiped is a fundamental region of the symmetry: any pattern on or in it is possible, and this defines the whole object."}, {"text": "The determinant det (A) of a square matrix A is a scalar that tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero. The linear transformation of Rn corresponding to a real n-by-n matrix is orientation preserving if and only if its determinant is positive."}, {"text": "Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A \u2212 \u03bbI) is zero. Therefore, the eigenvalues of A are values of \u03bb that satisfy the equation"}]}, {"question": "How is technology used to find the area to the right of Z", "positive_ctxs": [{"text": "0:041:23Suggested clip \u00b7 72 secondsQuick Example - Find the Area to the Right Of a Z-Score - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the backdoor criterion is satisfied for (X,Y), X and Y are deconfounded by the set of confounder variables. It is not necessary to control for any variables other than the confounders. 
The backdoor criterion is a sufficient but not necessary condition to find a set of variables Z to deconfound the analysis of the causal effect of X on Y."}, {"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "A fuzzy logic function represents a disjunction of constituents of minimum, where a constituent of minimum is a conjunction of variables of the current area greater than or equal to the function value in this area (to the right of the function value in the inequality, including the function value)."}, {"text": "dependent on the parameter \u03bc to be estimated, but with a standard normal distribution independent of the parameter \u03bc. Hence it is possible to find numbers \u2212z and z, independent of \u03bc, between which Z lies with probability 1 \u2212 \u03b1, a measure of how confident we want to be."}, {"text": "dependent on the parameter \u03bc to be estimated, but with a standard normal distribution independent of the parameter \u03bc. 
Hence it is possible to find numbers \u2212z and z, independent of \u03bc, between which Z lies with probability 1 \u2212 \u03b1, a measure of how confident we want to be."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}]}, {"question": "What is random process in communication", "positive_ctxs": [{"text": "\u2022 A random process is a time-varying function that assigns the outcome of a random experiment to each time instant: X(t). \u2022 For a fixed (sample path): a random process is a time varying function, e.g., a signal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Later versions of CSP abandoned communication based on process names in favor of anonymous communication via channels, an approach also used in Milner's work on the calculus of communicating systems and the \u03c0-calculus."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "-dimensional Euclidean space, or more abstract spaces. Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field. There are different interpretations of a point process, such a random counting measure or a random set."}, {"text": "-dimensional Euclidean space, or more abstract spaces. 
Sometimes the term point process is not preferred, as historically the word process denoted an evolution of some system in time, so a point process is also called a random point field. There are different interpretations of a point process, such as a random counting measure or a random set."}, {"text": "The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black\u2013Scholes\u2013Merton model. The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena."}, {"text": "The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black\u2013Scholes\u2013Merton model. The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "How long does it take to learn artificial intelligence", "positive_ctxs": [{"text": "Basically, it takes between 365 days (1 year) to 1,825 days (5 years) to learn artificial intelligence (assuming you put in 4 \u2013 0.5 learning hours a day). And how fast you learn also affects how long it takes you to be an expert."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. 
It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Pattern theory, formulated by Ulf Grenander, is a mathematical formalism to describe knowledge of the world as patterns. 
It differs from other approaches to artificial intelligence in that it does not begin by prescribing algorithms and machinery to recognize and classify patterns; rather, it prescribes a vocabulary to articulate and recast the pattern concepts in precise language."}, {"text": "According to Russell and Norvig, \"Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis. \"In contrast to Searle, Ray Kurzweil uses the term \"strong AI\" to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not."}, {"text": "According to Russell and Norvig, \"Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis. \"In contrast to Searle, Ray Kurzweil uses the term \"strong AI\" to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not."}]}, {"question": "What are some applications of the Autoregressive integrated moving average ARIMA model", "positive_ctxs": [{"text": "An autoregressive integrated moving average, or ARIMA, is a statistical analysis model that uses time series data to either better understand the data set or to predict future trends."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. 
Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). 
With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "This simple form of exponential smoothing is also known as an exponentially weighted moving average (EWMA). Technically it can also be classified as an autoregressive integrated moving average (ARIMA) (0,1,1) model with no constant term."}, {"text": "Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. These three classes depend linearly on previous data points."}]}, {"question": "What is an example of a descriptive statistic", "positive_ctxs": [{"text": "All descriptive statistics are either measures of central tendency or measures of variability, also known as measures of dispersion. Range, quartiles, absolute deviation and variance are all examples of measures of variability. Consider the following data set: 5, 19, 24, 62, 91, 100."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An important property of a test statistic is that its sampling distribution under the null hypothesis must be calculable, either exactly or approximately, which allows p-values to be calculated. A test statistic shares some of the same qualities of a descriptive statistic, and many statistics can be used as both test statistics and descriptive statistics. 
However, a test statistic is specifically intended for use in statistical testing, whereas the main quality of a descriptive statistic is that it is easily interpretable."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. 
This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory, and is frequently non-parametric statistics."}]}, {"question": "What is tensor rank", "positive_ctxs": [{"text": "The total number of contravariant and covariant indices of a tensor. The rank of a tensor is independent of the number of dimensions of the underlying space."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example, consider the following real tensor"}, {"text": "Contrary to the case of matrices, the rank of a tensor is presently not understood well. It is known that the problem of computing the rank of a tensor is NP-hard. The only notable well-understood case consists of tensors in"}, {"text": ", whose rank can be obtained from the Kronecker\u2013Weierstrass normal form of the linear matrix pencil that the tensor represents. A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition."}, {"text": "The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank"}, {"text": "When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in unbalanced tensor spaces is known to equal"}, {"text": "Two types of tensor decompositions exist, which generalise the SVD to multi-way arrays. One of them decomposes a tensor into a sum of rank-1 tensors, which is called a tensor rank decomposition. 
The second type of decomposition computes the orthonormal subspaces associated with the different factors appearing in the tensor product of vector spaces in which the tensor lives."}, {"text": "Two types of tensor decompositions exist, which generalise the SVD to multi-way arrays. One of them decomposes a tensor into a sum of rank-1 tensors, which is called a tensor rank decomposition. The second type of decomposition computes the orthonormal subspaces associated with the different factors appearing in the tensor product of vector spaces in which the tensor lives."}]}, {"question": "What is meant by a tensor", "positive_ctxs": [{"text": "Tensors are simply mathematical objects that can be used to describe physical properties, just like scalars and vectors. In fact tensors are merely a generalisation of scalars and vectors; a scalar is a zero rank tensor, and a vector is a first rank tensor."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). 
In EMP, a tensor is projected to a point through N unit projection vectors."}, {"text": "This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product)."}, {"text": "This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product)."}, {"text": "will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in"}]}, {"question": "What are generative adversarial networks used for", "positive_ctxs": [{"text": "Generative adversarial nets can be applied in many fields from generating images to predicting drugs, so don't be afraid of experimenting with them. 
We believe they help in building a better future for machine learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Typical discriminative models include logistic regression (LR), conditional random fields (CRFs) (specified over an undirected graph), decision trees, and many others. Typical generative model approaches include naive Bayes classifiers, Gaussian mixture models, variational autoencoders, generative adversarial networks and others."}, {"text": "The term \"generative model\" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers."}, {"text": "Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. 
This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model."}, {"text": "Other people had similar ideas but did not develop them similarly. An idea involving adversarial networks was published in a 2010 blog post by Olli Niemitalo. This idea was never implemented and did not involve stochasticity in the generator and thus was not a generative model."}]}, {"question": "What is meant by likelihood", "positive_ctxs": [{"text": "the state of being likely or probable; probability. a probability or chance of something: There is a strong likelihood of his being elected."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But the original use of the phrase \"complete Archimedean field\" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? 
(\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.If the likelihood function is differentiable, the derivative test for determining maxima can be applied."}]}, {"question": "What is canonical discriminant analysis", "positive_ctxs": [{"text": "Canonical discriminant analysis is a dimension-reduction technique related to principal component analysis and canonical correlation. This maximal multiple correlation is called the first canonical correlation. The coefficients of the linear combination are the canonical coefficients or canonical weights."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis."}, {"text": "Several variants of CA are available, including detrended correspondence analysis (DCA) and canonical correspondence analysis (CCA). The extension of correspondence analysis to many categorical variables is called multiple correspondence analysis. 
An adaptation of correspondence analysis to the problem of discrimination based upon qualitative variables (i.e., the equivalent of discriminant analysis for qualitative data) is called discriminant correspondence analysis or barycentric discriminant analysis."}, {"text": "The mapping from a high-dimensional vector space to a set of lower dimensional vector spaces is a multilinear projection. When observations are retained in the same organizational structure as the sensor provides them; as matrices or higher order tensors, their representations are computed by performing N multiple linear projections.Multilinear subspace learning algorithms are higher-order generalizations of linear subspace learning methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA) and canonical correlation analysis (CCA)."}, {"text": "Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data."}, {"text": "Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has none-the-less become the common choice, since the assumptions of discriminant analysis are rarely met."}, {"text": "Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has none-the-less become the common choice, since the assumptions of discriminant analysis are rarely met."}, {"text": "Unlike logistic regression, discriminant analysis can be used with small sample sizes. 
It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has none-the-less become the common choice, since the assumptions of discriminant analysis are rarely met."}]}, {"question": "How do you calculate entropy of information", "positive_ctxs": [{"text": "Entropy can be calculated for a random variable X with k in K discrete states as follows: H(X) = -sum(each k in K p(k) * log(p(k)))"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The conditional quantum entropy is an entropy measure used in quantum information theory. 
It is a generalization of the conditional entropy of classical information theory."}, {"text": "This maximal entropy of logb(n) is effectively attained by a source alphabet having a uniform probability distribution: uncertainty is maximal when all possible events are equiprobable.The entropy or the amount of information revealed by evaluating (X,Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as:"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "Where is Word2Vec used", "positive_ctxs": [{"text": "Word2Vec can be used to get actionable metrics from thousands of customers reviews. Businesses don't have enough time and tools to analyze survey responses and act on them thereon. This leads to loss of ROI and brand value. Word embeddings prove invaluable in such cases."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Where there is causation, there is correlation, but also a sequence in time from cause to effect, a plausible mechanism, and sometimes common and intermediate causes. While correlation is often used when inferring causation because it is a necessary condition, it is not a sufficient condition."}, {"text": "Where there is causation, there is correlation, but also a sequence in time from cause to effect, a plausible mechanism, and sometimes common and intermediate causes. 
While correlation is often used when inferring causation because it is a necessary condition, it is not a sufficient condition."}, {"text": "Where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace."}, {"text": "Where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace."}, {"text": "An extension of word vectors for creating a dense vector representation of unstructured radiology reports has been proposed by Banerjee et al. One of the biggest challenges with Word2Vec is how to handle unknown or out-of-vocabulary (OOV) words and morphologically similar words. This can particularly be an issue in domains like medicine where synonyms and related words can be used depending on the preferred style of radiologist, and words may have been used infrequently in a large corpus."}, {"text": "An extension of word vectors for creating a dense vector representation of unstructured radiology reports has been proposed by Banerjee et al. One of the biggest challenges with Word2Vec is how to handle unknown or out-of-vocabulary (OOV) words and morphologically similar words. This can particularly be an issue in domains like medicine where synonyms and related words can be used depending on the preferred style of radiologist, and words may have been used infrequently in a large corpus."}, {"text": "An extension of word vectors for creating a dense vector representation of unstructured radiology reports has been proposed by Banerjee et al. One of the biggest challenges with Word2Vec is how to handle unknown or out-of-vocabulary (OOV) words and morphologically similar words. 
This can particularly be an issue in domains like medicine where synonyms and related words can be used depending on the preferred style of radiologist, and words may have been used infrequently in a large corpus."}]}, {"question": "How do you interpret Q values", "positive_ctxs": [{"text": "This is the \u201cq-value.\u201d A p-value of 5% means that 5% of all tests will result in false positives. A q-value of 5% means that 5% of significant results will result in false positives."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). 
Then you can still do research, but it is not causal, it is correlational."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What does it mean when data is positively skewed", "positive_ctxs": [{"text": "In statistics, a positively skewed (or right-skewed) distribution is a type of distribution in which most values are clustered around the left tail of the distribution while the right tail of the distribution is longer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}, {"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}, {"text": "bigger than x, it does not necessarily mean you have made it plausible that it is smaller or equal than x; alternatively you may just have done a lousy measurement with low accuracy. Confirming the null hypothesis two-sided would amount to positively proving it is bigger or equal than 0 AND to positively proving it is smaller or equal than 0; this is something for which you need infinite accuracy as well as exactly zero effect neither of which normally are realistic. 
Also measurements will never indicate a non-zero probability of exactly zero difference.)"}, {"text": "A frequency distribution is said to be skewed when its mean and median are significantly different, or more generally when it is asymmetric. The kurtosis of a frequency distribution is a measure of the proportion of extreme values (outliers), which appear at either end of the histogram. If the distribution is more outlier-prone than the normal distribution it is said to be leptokurtic; if less outlier-prone it is said to be platykurtic."}, {"text": "A frequency distribution is said to be skewed when its mean and median are significantly different, or more generally when it is asymmetric. The kurtosis of a frequency distribution is a measure of the proportion of extreme values (outliers), which appear at either end of the histogram. If the distribution is more outlier-prone than the normal distribution it is said to be leptokurtic; if less outlier-prone it is said to be platykurtic."}, {"text": "drops to 0 even for small k. Lacking such definition, the element is \"random\" in a negative sense. But it is positively \"probabilistically random\" only when function"}, {"text": "This is an approximation to the mean for a moderately skewed distribution. It is used in hydrocarbon exploration and is defined as"}]}, {"question": "Does normalization improve performance machine learning", "positive_ctxs": [{"text": "Normalization is a technique often applied as part of data preparation for machine learning. The goal of normalization is to change the values of numeric columns in the dataset to a common scale, without distorting differences in the ranges of values. For machine learning, every dataset does not require normalization."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Feature engineering is the process of using domain knowledge to extract features from raw data via data mining techniques. 
These features can be used to improve the performance of machine learning algorithms. Feature engineering can be considered as applied machine learning itself."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. 
Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}, {"text": "A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."}]}, {"question": "How do you convert dimensional analysis", "positive_ctxs": [{"text": "2:316:15Suggested clip \u00b7 118 secondsUnit Conversion the Easy Way (Dimensional Analysis) - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. 
More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios."}, {"text": "The origins of dimensional analysis have been disputed by historians.The first written application of dimensional analysis has been credited to an article of Fran\u00e7ois Daviet at the Turin Academy of Science. Daviet had the master Lagrange as teacher."}, {"text": "Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is multi task learning in machine learning", "positive_ctxs": [{"text": "Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. 
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Is MSE equal to variance", "positive_ctxs": [{"text": "The MSE is a measure of the quality of an estimator\u2014it is always non-negative, and values closer to zero are better. For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Both linear regression techniques such as analysis of variance estimate the MSE as part of the analysis and use the estimated MSE to determine the statistical significance of the factors or predictors under study. The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects."}, {"text": "Both linear regression techniques such as analysis of variance estimate the MSE as part of the analysis and use the estimated MSE to determine the statistical significance of the factors or predictors under study. The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects."}, {"text": "But in real modeling case, MSE could be described as the addition of model variance, model bias, and irreducible uncertainty. According to the relationship, the MSE of the estimators could be simply used for the efficiency comparison, which includes the information of estimator variance and bias. This is called MSE criterion."}, {"text": "But in real modeling case, MSE could be described as the addition of model variance, model bias, and irreducible uncertainty. 
According to the relationship, the MSE of the estimators could be simply used for the efficiency comparison, which includes the information of estimator variance and bias. This is called MSE criterion."}, {"text": "The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator (how widely spread the estimates are from one data sample to another) and its bias (how far off the average estimated value is from the true value). For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated."}, {"text": "The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator (how widely spread the estimates are from one data sample to another) and its bias (how far off the average estimated value is from the true value). For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated."}, {"text": "Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias."}]}, {"question": "How do you solve the Wilcoxon rank sum test", "positive_ctxs": [{"text": "4:026:15Suggested clip \u00b7 93 secondsFinding the Test Statistic for a Wilcoxon Rank Sum Test in - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The logrank statistic can be used when observations are censored. 
If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "The logrank statistic can be used when observations are censored. If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In a single paper in 1945, Frank Wilcoxon proposed both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample size case in that paper (though in a later paper he gave larger tables)."}, {"text": "In a single paper in 1945, Frank Wilcoxon proposed both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample size case in that paper (though in a later paper he gave larger tables)."}, {"text": "The Mann\u2013Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann\u2013Whitney U test is applied to independent samples. The Wilcoxon signed-rank test is applied to matched or dependent samples."}, {"text": "The Mann\u2013Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann\u2013Whitney U test is applied to independent samples. 
The Wilcoxon signed-rank test is applied to matched or dependent samples."}]}, {"question": "What is logically equivalent to P or Q", "positive_ctxs": [{"text": "if p is a statement variable, the negation of p is \"not p\", denoted by ~p. If p is true, then ~p is false. Conjunction: if p and q are statement variables, the conjunction of p and q is \"p and q\", denoted p q.(p q) ~(p q) p xor qExclusive Orp ~(~p)Double Negation"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ".That is, \"If not-Q, then not-P\", or, more clearly, \"If Q is not the case, then P is not the case.\" Using our example, this is rendered as \"If Socrates is not human, then Socrates is not a man.\" This statement is said to be contraposed to the original and is logically equivalent to it."}, {"text": "The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument."}, {"text": "Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit"}, {"text": "This is only false when P is true and Q is false. Therefore, we can reduce this proposition to the statement \"False when P and not-Q\" (i.e. \"True when it is not the case that P and not-Q\"):"}, {"text": "The compound p \u2192 q is also logically equivalent to \u00acp \u2228 q (either not p, or q (or both)), and to \u00acq \u2192 \u00acp (if not q then not p). It is, however, not equivalent to \u00acp \u2192 \u00acq, which is instead equivalent to q \u2192 p."}, {"text": "The root cause of such a logic error is sometimes failure to realize that just because P is a possible condition for Q, P may not be the only condition for Q, i.e. Q may follow from another condition as well.Affirming the consequent can also result from overgeneralizing the experience of many statements having true converses. 
If P and Q are \"equivalent\" statements, i.e."}, {"text": "In classical logic, p \u2192 q is logically equivalent to \u00ac(p \u2227 \u00acq) and, by De Morgan's Law, logically equivalent to \u00acp \u2228 q. Whereas in minimal logic (and therefore also intuitionistic logic), p \u2192 q only logically entails \u00ac(p \u2227 \u00acq); and in intuitionistic logic (but not minimal logic), \u00acp \u2228 q entails p \u2192 q."}]}, {"question": "How do you know when to use a permutation instead of a combination", "positive_ctxs": [{"text": "The difference between combinations and permutations is ordering. With permutations we care about the order of the elements, whereas with combinations we don't. For example, say your locker \u201ccombo\u201d is 5432. If you enter 4325 into your locker it won't open because it is a different ordering (aka permutation)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is often fundamental to fields that use randomized algorithms such as coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: this is ideally a random permutation of the 52 cards."}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. 
That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}]}, {"question": "What are two methods researchers use to avoid experimenter bias", "positive_ctxs": [{"text": "to safeguard against the researcher problem of experimenter bias, researchers employ blind observers, single and double blind study, and placebos. to control for ethnocentrism, they use cross cultural sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In science research, experimenter bias occurs when experimenter expectancies regarding study results bias the research outcome. Examples of experimenter bias include conscious or unconscious influences on subject behavior including creation of demand characteristics that influence subjects, and altered or selective recording of experimental results themselves."}, {"text": "In 1985, Anton Nederhof compiled a list of techniques and methodological strategies for researchers to use to mitigate the effects of social desirability bias in their studies. Most of these strategies involve deceiving the subject, or are related to the way questions in surveys and questionnaires are presented to those in a study. A condensed list of seven of the strategies are listed below:"}, {"text": "Another way that researchers attempt to reduce demand characteristics is by being as neutral as possible, or training those conducting the experiment to be as neutral as possible. For example, studies show that extensive one-on-one contact between the experimenter and the participant makes it more difficult to be neutral, and go on to suggest that this type of interaction should be limited when designing an experiment. 
Another way to prevent demand characteristics is to use blinded experiments with placebos or control groups."}, {"text": "Some practitioners have tried to estimate and impute these missing sensitive categorisations in order to allow bias mitigation, for example building systems to infer ethnicity from names, however this can introduce other forms of bias if not undertaken with care. Machine learning researchers have drawn upon cryptographic privacy-enhancing technologies such as secure multi-party computation to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in cleartext.Algorithmic bias does not only include protected categories, but can also concerns characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult."}, {"text": "Some practitioners have tried to estimate and impute these missing sensitive categorisations in order to allow bias mitigation, for example building systems to infer ethnicity from names, however this can introduce other forms of bias if not undertaken with care. Machine learning researchers have drawn upon cryptographic privacy-enhancing technologies such as secure multi-party computation to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in cleartext.Algorithmic bias does not only include protected categories, but can also concerns characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial ground truth, and removing the bias from such a system is more difficult."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. 
What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Funding bias refers to the tendency of a scientific study to support the interests of the study's financial sponsor. This phenomenon is recognized sufficiently that researchers undertake studies to examine bias in past published studies. It can be caused by any or all of: a conscious or subconscious sense of obligation of researchers towards their employers, misconduct or malpractice, publication bias, or reporting bias."}]}, {"question": "Which assumption does omitted variable bias violate", "positive_ctxs": [{"text": "In ordinary least squares, the relevant assumption of the classical linear regression model is that the error term is uncorrelated with the regressors. The presence of omitted-variable bias violates this particular assumption. The violation causes the OLS estimator to be biased and inconsistent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The second term after the equal sign is the omitted-variable bias in this case, which is non-zero if the omitted variable z is correlated with any of the included variables in the matrix X (that is, if X\u2032Z does not equal a vector of zeroes). Note that the bias is equal to the weighted portion of zi which is \"explained\" by xi."}, {"text": "The problem with this approach is that it would violate the assumption of independence, and thus could bias our results. This is known as atomistic fallacy. Another way to analyze the data using traditional statistical approaches is to aggregate individual level variables to higher-order variables and then to conduct an analysis on this higher level."}, {"text": "is unobserved, and correlated with at least one of the independent variables, then it will cause omitted variable bias in a standard OLS regression. 
However, panel data methods, such as the fixed effects estimator or alternatively, the first-difference estimator can be used to control for it."}, {"text": "If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest. This effect is called confounding or omitted variable bias; in these situations, design changes and/or controlling for a variable statistical control is necessary."}, {"text": "If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest. This effect is called confounding or omitted variable bias; in these situations, design changes and/or controlling for a variable statistical control is necessary."}, {"text": "If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest. This effect is called confounding or omitted variable bias; in these situations, design changes and/or controlling for a variable statistical control is necessary."}, {"text": "If included in a regression, it can improve the fit of the model. If it is excluded from the regression and if it has a non-zero covariance with one or more of the independent variables of interest, its omission will bias the regression's result for the effect of that independent variable of interest. 
This effect is called confounding or omitted variable bias; in these situations, design changes and/or controlling for a variable statistical control is necessary."}]}, {"question": "How do you do multiple logistic regression", "positive_ctxs": [{"text": "0:012:32Suggested clip \u00b7 101 secondsMultiple Logistic Regression - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. 
To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor."}, {"text": "Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor."}]}, {"question": "What does AGI stand for in artificial intelligence", "positive_ctxs": [{"text": "Artificial General Intelligence"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. 
However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences."}, {"text": "MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences."}, {"text": "As of August 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive, at all. At one extreme, AI pioneer Herbert A. Simon speculated in 1965: \"machines will be capable, within twenty years, of doing any work a man can do\"."}, {"text": "As of August 2020, AGI remains speculative as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive, at all. At one extreme, AI pioneer Herbert A. Simon speculated in 1965: \"machines will be capable, within twenty years, of doing any work a man can do\"."}, {"text": "For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all and to avoid any mention of \"human level\" artificial intelligence for fear of being labeled \"wild-eyed dreamer[s].\""}]}, {"question": "How do you do cross tabulation", "positive_ctxs": [{"text": "Cross tabulation: Cross tabulations require that the two data columns be adjacent. You can drag columns by selecting them, and moving the cursor so it's immediately between two columns. 
Once you have the columns adjacent, select both of them including the variable names all the way to the bottom."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. Those methods are approximations of leave-p-out cross-validation."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "How is factor analysis different from multiple regression", "positive_ctxs": [{"text": "Factor analysis is as much of a \"test\" as multiple regression (or statistical tests in general) in that it is used to reveal hidden or latent relationships/groupings in one's dataset. Multiple regression takes data points in some n-dimensional space and finds the best fit line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Very often, in questionnaires, the questions are structured in several issues. In the statistical analysis it is necessary to take into account this structure. This is the aim of multiple factor analysis which balances the different issues (i.e."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": 
"Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}]}, {"question": "What is augmented reality used for", "positive_ctxs": [{"text": "When someone talks about AR, they are referring to technology that overlays information and virtual objects on real-world scenes in real-time. It uses the existing environment and adds information to it to make a new artificial environment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Many definitions of augmented reality only define it as overlaying the information. This is basically what a head-up display does; however, practically speaking, augmented reality is expected to include registration and tracking between the superimposed perceptions, sensations, information, data, and images and some portion of the real world."}, {"text": "With the improvement of technology and computers, augmented reality is going to lead to a drastic change on ones perspective of the real world. According to Time, in about 15\u201320 years it is predicted that augmented reality and virtual reality are going to become the primary use for computer interactions. Computers are improving at a very fast rate, leading to new ways to improve other technology."}, {"text": "Many computer vision methods of augmented reality are inherited from visual odometry. An augogram is a computer generated image that is used to create AR. Augography is the science and software practice of making augograms for AR."}, {"text": "In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one. 
Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality."}, {"text": "In virtual reality (VR), the users' perception of reality is completely based on virtual information. In augmented reality (AR) the user is provided with additional computer generated information that enhances their perception of reality. For example, in architecture, VR can be used to create a walk-through simulation of the inside of a new building; and AR can be used to show a building's structures and systems super-imposed on a real-life view."}, {"text": "Reflets is a novel augmented reality display dedicated to musical performances where the audience acts as a 3D display by revealing virtual content on stage, which can also be used for 3D musical interaction and collaboration."}, {"text": "On the other hand, in VR the surrounding environment is completely virtual. A demonstration of how AR layers objects onto the real world can be seen with augmented reality games. WallaMe is an augmented reality game application that allows users to hide messages in real environments, utilizing geolocation technology in order to enable users to hide messages wherever they may wish in the world."}]}, {"question": "What are the advantages of Bayesian networks", "positive_ctxs": [{"text": "They provide a natural way to handle missing data, they allow combination of data with domain knowledge, they facilitate learning about causal relationships between variables, they provide a method for avoiding overfitting of data (Heckerman, 1995), they can show good prediction accuracy even with rather small sample"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex\". CiteSeerX 10.1.1.132.6744. a paper describing earlier pre-HTM Bayesian model by the co-founder of Numenta. 
This is the first model of memory-prediction framework that uses Bayesian networks and all the above models are based on these initial ideas."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1\u2212\u025b)) for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. 
This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1\u2212\u025b)) for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}, {"text": "A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies). 
The underlying graph of a Markov random field may be finite or infinite."}]}, {"question": "How do neural networks reduce loss", "positive_ctxs": [{"text": "Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on. If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. 
Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. 
Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}]}, {"question": "What is the difference between likelihood function and posterior probability", "positive_ctxs": [{"text": "To put simply, likelihood is \"the likelihood of \u03b8 having generated D\" and posterior is essentially \"the likelihood of \u03b8 having generated D\" further multiplied by the prior distribution of \u03b8."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In Bayesian probability theory, if the posterior distributions p(\u03b8 | x) are in the same probability distribution family as the prior probability distribution p(\u03b8), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | \u03b8). For example, the Gaussian family is conjugate to itself (or self-conjugate) with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. This means that the Gaussian distribution is a conjugate prior for the likelihood that is also Gaussian."}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. 
See Likelihood function \u00a7 Non-integrability for details."}, {"text": "By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. See Likelihood function \u00a7 Non-integrability for details."}, {"text": "By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. See Likelihood function \u00a7 Non-integrability for details."}, {"text": "In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter."}, {"text": "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. 
The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. If the likelihood function is differentiable, the derivative test for determining maxima can be applied."}, {"question": "Why and what to do when neural networks perform poorly on the training set", "positive_ctxs": [{"text": "Adding more training data. Reducing parameters. We have too many neurons in our hidden layers or too many layers. Let's remove some layers, or reduce the number of hidden neurons. Increase regularization. Either by increasing our. for L1/L2 weight regularization. We can also use the dropout technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It implicitly bases the bin sizes on the range of the data and can perform poorly if n < 30, because the number of bins will be small\u2014less than seven\u2014and unlikely to show trends in the data well. It may also perform poorly if the data are not normally distributed."}, {"text": "It implicitly bases the bin sizes on the range of the data and can perform poorly if n < 30, because the number of bins will be small\u2014less than seven\u2014and unlikely to show trends in the data well. It may also perform poorly if the data are not normally distributed."}, {"text": "Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. 
Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. 
Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}, {"text": "Recurrent neural networks are generally considered the best neural network architectures for time series forecasting (and sequence modeling in general), but recent studies show that convolutional networks can perform comparably or even better. Dilated convolutions might enable one-dimensional convolutional neural networks to effectively learn time series dependences. Convolutions can be implemented more efficiently than RNN-based solutions, and they do not suffer from vanishing (or exploding) gradients."}]}, {"question": "What is learning and how do we learn", "positive_ctxs": [{"text": "Learning involves far more than thinking: it involves the whole personality - senses, feelings, intuition, beliefs, values and will. Learning occurs when we are able to: Gain a mental or physical grasp of the subject. Make sense of a subject, event or feeling by interpreting it into our own words or actions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The learning element uses feedback from the \"critic\" on how the agent is doing and determines how the performance element, or \"actor\", should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions."}, {"text": "What we do know, however, is that according to our working assumptions the estimate we have is a poor indication of the true value of the revenue and is likely to be substantially wrong. So, methodologically speaking, we have to display the true value at a distance from its estimate. In fact, it would be even more enlightening to display a number of possible true values ."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? 
How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is Expectation Maximization clustering", "positive_ctxs": [{"text": "EM is an iterative method which alternates between two steps, expectation (E) and maximization (M). For clustering, EM makes use of the finite Gaussian mixtures model and estimates a set of parameters iteratively until a desired convergence value is achieved."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2."}, {"text": "That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "are vectors of lower and upper bounds on the design variables. Maximization problems can be converted to minimization problems by multiplying the objective by -1. Constraints can be reversed in a similar manner."}]}, {"question": "What does multiple logistic regression mean", "positive_ctxs": [{"text": "Simple logistic regression analysis refers to the regression application with one dichotomous outcome and one independent variable; multiple logistic regression analysis applies when there is a single dichotomous outcome and more than one independent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). 
The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. 
A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}]}, {"question": "What is tokenization NLP", "positive_ctxs": [{"text": "Tokenization is one of the most common tasks when it comes to working with text data. Tokenization is essentially splitting a phrase, sentence, paragraph, or an entire text document into smaller units, such as individual words or terms. Each of these smaller units are called tokens."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the fact that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs."}]}, {"question": "How do you do a clipping path", "positive_ctxs": [{"text": "Click on the triangle-shaped icon located at the top right corner of the panel, and then choose \"Save Path\". Next, select \"Clipping Path\" from the same drop-down menu. A new dialog box will appear with a variety of clipping path settings. Make sure your path is selected, and then click OK."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Is reinforcement learning AI", "positive_ctxs": [{"text": "It's a form of machine learning and therefore a branch of artificial intelligence. 
Depending on the complexity of the problem, reinforcement learning algorithms can keep adapting to the environment over time if necessary in order to maximize the reward in the long-term."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}, {"text": "This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning."}, {"text": "Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment."}, {"text": "Associative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment."}, {"text": "The three major learning paradigms are supervised learning, unsupervised learning and reinforcement learning. They each correspond to a particular learning task"}, {"text": "The three major learning paradigms are supervised learning, unsupervised learning and reinforcement learning. They each correspond to a particular learning task"}, {"text": "Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. 
Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of state space."}]}, {"question": "How do you know if an event is independent or dependent", "positive_ctxs": [{"text": "Independent EventsTwo events A and B are said to be independent if the fact that one event has occurred does not affect the probability that the other event will occur.If whether or not one event occurs does affect the probability that the other event will occur, then the two events are said to be dependent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}, {"text": "Two random variables, X and Y, are said to be independent if any event defined in terms of X is independent of any event defined in terms of Y. 
Formally, they generate independent \u03c3-algebras, where two \u03c3-algebras G and H, which are subsets of F are said to be independent if any element of G is independent of any element of H."}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}]}, {"question": "How might a statistical test be statistically significant but not practical", "positive_ctxs": [{"text": "While statistical significance relates to whether an effect exists, practical significance refers to the magnitude of the effect. However, no statistical test can tell you whether the effect is large enough to be important in your field of study. An effect of 4 points or less is too small to care about."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive and not replicable. There is also a difference between statistical significance and practical significance. A study that is found to be statistically significant may not necessarily be practically significant."}, {"text": "Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive and not replicable. There is also a difference between statistical significance and practical significance. 
A study that is found to be statistically significant may not necessarily be practically significant."}, {"text": "This shortcoming is especially concerning given that even a small error in blinding can produce a statistically significant result in the absence of any real difference between test groups when a study is sufficiently powered (i.e. statistical significance is not robust to bias). As such, many statistically significant results in randomized controlled trials may be caused by error in blinding."}, {"text": "This shortcoming is especially concerning given that even a small error in blinding can produce a statistically significant result in the absence of any real difference between test groups when a study is sufficiently powered (i.e. statistical significance is not robust to bias). As such, many statistically significant results in randomized controlled trials may be caused by error in blinding."}, {"text": "ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result, when a probability (p-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high."}, {"text": "ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. 
A statistically significant result, when a probability (p-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high."}, {"text": "ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result, when a probability (p-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high."}]}, {"question": "Why should I learn deep learning", "positive_ctxs": [{"text": "When there is lack of domain understanding for feature introspection , Deep Learning techniques outshines others as you have to worry less about feature engineering . Deep Learning really shines when it comes to complex problems such as image classification, natural language processing, and speech recognition."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another class of model-free deep reinforcement learning algorithms rely on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function"}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. 
Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}, {"text": "Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. 
Despite the power of deep learning methods, they still lack much of the functionality needed for realizing this goal entirely. Research psychologist Gary Marcus noted:\"Realistically, deep learning is only part of the larger challenge of building intelligent machines."}]}, {"question": "Is linear discriminant analysis supervised or unsupervised", "positive_ctxs": [{"text": "Linear discriminant analysis (LDA) is one of commonly used supervised subspace learning methods. The objective optimization is in both the ratio trace and the trace ratio forms, forming a complete framework of a new approach to jointly clustering and unsupervised subspace learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. 
The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. 
The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}]}, {"question": "What does Communalities mean in factor analysis", "positive_ctxs": [{"text": "Communalities \u2013 This is the proportion of each variable's variance that can be explained by the factors (e.g., the underlying latent continua). It is also noted as h2 and can be defined as the sum of squared factor loadings for the variables. They are the reproduced variances from the factors that you have extracted."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities."}, {"text": "LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities."}, {"text": "LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. 
PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities."}, {"text": "LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities."}, {"text": "LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities."}, {"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. 
In latent trait analysis and latent class analysis, the manifest variables are discrete."}]}, {"question": "What is nonlinearity in neural networks", "positive_ctxs": [{"text": "Non-linearity in neural networks simply mean that the output at any unit cannot be reproduced from a linear function of the input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Logistic functions are often used in neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a \"smoothed\" variant of the classical threshold neuron."}, {"text": "Logistic functions are often used in neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a \"smoothed\" variant of the classical threshold neuron."}, {"text": "Ans and Rousset (1997) also proposed a two-network artificial neural architecture with memory self-refreshing that overcomes catastrophic interference when sequential learning tasks are carried out in distributed networks trained by backpropagation. The principle is to interleave, at the time when new external patterns are learned, those to-be-learned new external patterns with internally generated pseudopatterns, or 'pseudo-memories', that reflect the previously learned information. 
What mainly distinguishes this model from those that use classical pseudorehearsal in feedforward multilayer networks is a reverberating process that is used for generating pseudopatterns."}, {"text": "Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning where a neural network is used to represent policies or value functions. As in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single layered neural network, it is sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. 
CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}]}, {"question": "Difference between stochastic gradient descent and online learning", "positive_ctxs": [{"text": "Stochastic Gradient Descent: you would randomly select one of those training samples at each iteration to update your coefficients. Online Gradient Descent: you would use the \"most recent\" sample at each iteration. There is no stochasticity as you deterministically select your sample."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. 
This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}, {"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. 
This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}]}, {"question": "How do you create a bivariate normal distribution", "positive_ctxs": [{"text": "The first method involves the conditional distribution of a random variable X2 given X1. Therefore, a bivariate normal distribution can be simulated by drawing a random variable from the marginal normal distribution and then drawing a second random variable from the conditional normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n \u2212 2. Specifically, if the underlying variables are white and have a bivariate normal distribution, the variable"}, {"text": "For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n \u2212 2. Specifically, if the underlying variables are white and have a bivariate normal distribution, the variable"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "\"The radius around the true mean in a bivariate normal random variable, re-written in polar coordinates (radius and angle), follows a Hoyt distribution. \"In one dimension the probability of finding a sample of the normal distribution in the interval"}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. 
For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "What does a resistor do in a crossover", "positive_ctxs": [{"text": "In a crossover network, resistors are usually used in combination with other components to control either impedance magnitudes or the relative levels between different drivers in a system."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In medicine, a crossover study or crossover trial is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). While crossover studies can be observational studies, many important crossover studies are controlled experiments, which are discussed in this article. Crossover designs are common for experiments in many scientific disciplines, for example psychology, pharmaceutical science, and medicine."}, {"text": "A popular repeated-measures is the crossover study. A crossover study is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). While crossover studies can be observational studies, many important crossover studies are controlled experiments."}, {"text": ", and suppose the line is terminated with a matched resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). 
By Ohm's law, the power delivered to the resistor at time"}, {"text": "A crossover trial has a repeated measures design in which each patient is assigned to a sequence of two or more treatments, of which one may be a standard treatment or a placebo."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "One way to prove this result is using the connection to electrical networks. Take a map of the city and place a one ohm resistor on every block. Now measure the \"resistance between a point and infinity.\""}]}, {"question": "What does a regression equation tell you", "positive_ctxs": [{"text": "A regression equation is used in stats to find out what relationship, if any, exists between sets of data. For example, if you measure a child's height every year you might find that they grow about 3 inches a year. That trend (growing three inches a year) can be modeled with a regression equation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i= 1= to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i= 1= to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}]}, {"question": "What is the difference between optimal control theory and reinforcement learning", "positive_ctxs": [{"text": "Optimal control focuses on a subset of problems, but solves these problems very well, and has a rich history. 
RL can be thought of as a way of generalizing or extending ideas from optimal control to non-traditional control problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. 
The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible."}, {"text": "The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible."}, {"text": "The examples thus far have shown continuous time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete time systems and solutions. The Theory of Consistent Approximations provides conditions under which solutions to a series of increasingly accurate discretized optimal control problem converge to the solution of the original, continuous-time problem."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "Why is gradient boosting better than random forest", "positive_ctxs": [{"text": "Random forests perform well for multi-class object detection and bioinformatics, which tends to have a lot of statistical noise. 
Gradient Boosting performs well when you have unbalanced data such as in real time risk assessment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the \u201cobserved\u201d data from suitably generated synthetic data."}, {"text": "Like other boosting methods, gradient boosting combines weak \"learners\" into a single strong learner in an iterative fashion. 
It is easiest to explain in the least-squares regression setting, where the goal is to \"teach\" a model"}, {"text": "Like other boosting methods, gradient boosting combines weak \"learners\" into a single strong learner in an iterative fashion. It is easiest to explain in the least-squares regression setting, where the goal is to \"teach\" a model"}]}, {"question": "What is the difference between supervised and unsupervised learning", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. 
It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". 
The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}]}, {"question": "What is PD and LGD", "positive_ctxs": [{"text": "PD analysis is a method used by larger institutions to calculate their expected loss. A PD is assigned to each risk measure and represents as a percentage the likelihood of default. LGD represents the amount unrecovered by the lender after selling the underlying asset if a borrower defaults on a loan."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high level cognitive dysfunction and subtle language problems. PD is both chronic and progressive."}, {"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "Why instance based learning is called as lazy learning", "positive_ctxs": [{"text": "Instance-based methods are sometimes referred to as lazy learning methods because they delay processing until a new instance must be classified. The nearest neighbors of an instance are defined in terms of Euclidean distance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is also typically taken to be fixed but unknown, most collective-assumption based methods focus on learning this distribution, as in the single-instance version.While the collective assumption weights every instance with equal importance, Foulds extended the collective assumption to incorporate instance weights. The weighted collective assumption is then that"}, {"text": "The main advantage gained in employing an eager learning method, such as an artificial neural network, is that the target function will be approximated globally during training, thus requiring much less space than using a lazy learning system. Eager learning systems also deal much better with noise in the training data. Eager learning is an example of offline learning, in which post-training queries to the system have no effect on the system itself, and thus the same query to the system will always produce the same result."}, {"text": "Because language support for sorting is more ubiquitous, the simplistic approach of sorting followed by indexing is preferred in many environments despite its disadvantage in speed. 
Indeed, for lazy languages, this simplistic approach can even achieve the best complexity possible for the k smallest/greatest sorted (with maximum/minimum as a special case) if the sort is lazy enough."}, {"text": "In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance."}, {"text": "In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance."}, {"text": "In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance."}, {"text": "In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. 
The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance."}]}, {"question": "How do you get rid of experimenter bias", "positive_ctxs": [{"text": "Other ways of avoiding experimenter's bias include standardizing methods and procedures to minimize differences in experimenter-subject interactions; using blinded observers or confederates as assistants, further distancing the experimenter from the subjects; and separating the roles of investigator and experimenter."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In science research, experimenter bias occurs when experimenter expectancies regarding study results bias the research outcome. Examples of experimenter bias include conscious or unconscious influences on subject behavior including creation of demand characteristics that influence subjects, and altered or selective recording of experimental results themselves."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. Addition, the simplest of these, is undone by subtraction: when you add 5 to x to get x + 5, to reverse this operation you need to subtract 5 from x + 5. Multiplication, the next-simplest operation, is undone by division: if you multiply x by 5 to get 5x, you then can divide 5x by 5 to return to the original expression x. Logarithms also undo a fundamental arithmetic operation, exponentiation."}, {"text": "Unusual amounts, above or below predetermined thresholds, may also be reviewed. 
There are several types of data cleaning that are dependent upon the type of data in the set; this could be phone numbers, email addresses, employers, or other values. Quantitative data methods for outlier detection can be used to get rid of data that appears to have a higher likelihood of being input incorrectly."}, {"text": "This expression means that y is equal to the power that you would raise b to, to get x. This operation undoes exponentiation because the logarithm of x tells you the exponent that the base has been raised to."}, {"text": "Futurama- Bender is a good example of sapient t AI, throughout many episodes, you will see Bender get angry, sad, or other emotions. Bender also having a mind of his own."}]}, {"question": "Why data should be normally distributed", "positive_ctxs": [{"text": "The normal distribution is the most important probability distribution in statistics because it fits many natural phenomena. For example, heights, blood pressure, measurement error, and IQ scores follow the normal distribution. It is also known as the Gaussian distribution and the bell curve."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is normally distributed with mean \u03bc and variance \u03c32/n. Moreover, it is possible to show that these two random variables (the normally distributed one Z and the chi-squared-distributed one V) are independent."}, {"text": "If an improper prior proportional to \u03c3\u22122 is placed over the variance, the t-distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown distributed according to a conjugate normally distributed prior, or is unknown distributed according to an improper constant prior."}, {"text": ", which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). 
The variance of the sum is equal to the sum of the variances, which is asymptotic to"}, {"text": "Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use."}, {"text": "(In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the fact that (1) the average of normally distributed random variables is also normally distributed; (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a student's t-distribution."}, {"text": "must have multivariate normal distribution. However, a pair of jointly normally distributed variables need not be independent (would only be so if uncorrelated,"}, {"text": "If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if n is large and \u03c32/n < +\u221e. This is a consequence of the central limit theorem."}]}, {"question": "What is a histogram in image processing", "positive_ctxs": [{"text": "An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. 
The vertical axis represents the size of the area (total number of pixels) that is captured in each one of these zones."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The magnitude and direction calculations for the gradient are done for every pixel in a neighboring region around the keypoint in the Gaussian-blurred image L. An orientation histogram with 36 bins is formed, with each bin covering 10 degrees. Each sample in the neighboring window added to a histogram bin is weighted by its gradient magnitude and by a Gaussian-weighted circular window with a"}, {"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. 
Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing."}]}, {"question": "What are the three conditions for constructing a confidence interval for the population mean", "positive_ctxs": [{"text": "conditions\u2014Random, Normal, and Independent\u2014is. important when constructing a confidence interval."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a dataset and still be able to derive a valid confidence interval. One way of assessing optimality is by the length of the interval so that a rule for constructing a confidence interval is judged better than another if it leads to intervals whose lengths are typically shorter."}, {"text": "This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a dataset and still be able to derive a valid confidence interval. 
One way of assessing optimality is by the length of the interval so that a rule for constructing a confidence interval is judged better than another if it leads to intervals whose lengths are typically shorter."}, {"text": "In many applications, confidence intervals that have exactly the required confidence level are hard to construct. But practically useful intervals can still be found: the rule for constructing the interval may be accepted as providing a confidence interval at level"}, {"text": "In many applications, confidence intervals that have exactly the required confidence level are hard to construct. But practically useful intervals can still be found: the rule for constructing the interval may be accepted as providing a confidence interval at level"}, {"text": "This example assumes that the samples are drawn from a normal distribution. The basic procedure for calculating a confidence interval for a population mean is as follows:"}, {"text": "This example assumes that the samples are drawn from a normal distribution. The basic procedure for calculating a confidence interval for a population mean is as follows:"}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}]}, {"question": "Which type of data is often Modelled using regression trees", "positive_ctxs": [{"text": "Regression trees are used in Statistics, Data Mining and Machine learning. It is a very important and powerful technique when it comes to predictive analysis [5] . 
The goal is to predict the value of the target variable on the basis of several input attributes that act as nodes of the regression tree."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In regression problems, case resampling refers to the simple scheme of resampling individual cases \u2013 often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable. However, the method is open to criticism."}, {"text": "will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand."}, {"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees."}, {"text": "Regularized trees naturally handle numerical and categorical features, interactions and nonlinearities. They are invariant to attribute scales (units) and insensitive to outliers, and thus, require little data preprocessing such as normalization. Regularized random forest (RRF) is one type of regularized trees."}, {"text": "By itself, a regression is simply a calculation using the data. In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions."}, {"text": "By itself, a regression is simply a calculation using the data. In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions."}, {"text": "By itself, a regression is simply a calculation using the data. 
In order to interpret the output of a regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classical assumptions."}]}, {"question": "Which parameter in SVM is responsible for tradeoff between misclassification and simplicity of model", "positive_ctxs": [{"text": "The C parameter trades off misclassification of training examples against simplicity of the decision surface. A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly by giving the model freedom to select more samples as support vectors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "refer to the data-optimized ordinary least squares solutions. We can then define the Lagrangian as a tradeoff between the in-sample accuracy of the data-optimized solutions and the simplicity of sticking to the hypothesized values."}, {"text": "Research has shown that unary coding is used in the neural circuits responsible for birdsong production. The use of unary in biological networks is presumably due to the inherent simplicity of the coding. Another contributing factor could be that unary coding provides a certain degree of error correction."}, {"text": "An estimator is a decision rule used for estimating a parameter. In this case the set of actions is the parameter space, and a loss function details the cost of the discrepancy between the true value of the parameter and the estimated value. For example, in a linear model with a single scalar parameter"}, {"text": "A large number of procedures have been developed for parameter estimation and inference in linear regression. 
These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency."}, {"text": "A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency."}, {"text": "A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency."}, {"text": "A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency."}]}, {"question": "How does Average Linkage work in Hierarchical Agglomerative clustering", "positive_ctxs": [{"text": "Average-linkage is where the distance between each pair of observations in each cluster are added up and divided by the number of pairs to get an average inter-cluster distance. 
Average-linkage and complete-linkage are the two most popular distance metrics in hierarchical clustering."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}]}, {"question": "Is neural network a part of machine learning", "positive_ctxs": [{"text": "Each is essentially a component of the prior term. That is, machine learning is a subfield of artificial intelligence. 
Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology."}, {"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "Deep learning is a form of machine learning that utilizes a neural network to transform a set of inputs into a set of outputs via an artificial neural network. Deep learning methods, often using supervised learning with labeled datasets, have been shown to solve tasks that involve handling complex, high-dimensional raw input data such as images, with less manual feature engineering than prior methods, enabling significant progress in several fields including computer vision and natural language processing."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. 
Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}]}, {"question": "What is data visualization and its techniques", "positive_ctxs": [{"text": "Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. 
It is one of the steps in data analysis or data science."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "They specialized in visualization techniques for exploring and analyzing relational databases and data cubes, and started the company as a commercial outlet for research at Stanford from 1999 to 2002."}, {"text": "An in-depth, visual exploration of feature visualization and regularization techniques was published more recently.The cited resemblance of the imagery to LSD- and psilocybin-induced hallucinations is suggestive of a functional resemblance between artificial neural networks and particular layers of the visual cortex."}, {"text": "An in-depth, visual exploration of feature visualization and regularization techniques was published more recently.The cited resemblance of the imagery to LSD- and psilocybin-induced hallucinations is suggestive of a functional resemblance between artificial neural networks and particular layers of the visual cortex."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "An important part of data analysis and presentation is the visualization (or plotting) of data. The subject of plotting Likert (and other) rating data is discussed at length in two papers by Robbins and Heiberger. In the first they recommend the use of what they call diverging stacked bar charts and compare them to other plotting styles."}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "What is the underlying framework used to represent knowledge? 
Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}]}, {"question": "What does a distribution tell us about a set of data", "positive_ctxs": [{"text": "A data distribution is a function or a listing which shows all the possible values (or intervals) of the data. It also (and this is important) tells you how often each value occurs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? What would the expectation value of the mutual information be?"}, {"text": "Although the nomological network proposed a theory of how to strengthen constructs, it doesn't tell us how we can assess the construct validity in a study."}, {"text": "Although the nomological network proposed a theory of how to strengthen constructs, it doesn't tell us how we can assess the construct validity in a study."}, {"text": "Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters. Conversely a non-parametric model differs precisely in that it makes no assumptions about a parametric distribution when modeling the data."}, {"text": "A frequency distribution shows us a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in a class. It is a way of showing unorganized data notably to show results of an election, income of people for a certain region, sales of a product within a certain period, student loan amounts of graduates, etc. 
Some of the graphs that can be used with frequency distributions are histograms, line charts, bar charts and pie charts."}, {"text": "A frequency distribution shows us a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in a class. It is a way of showing unorganized data notably to show results of an election, income of people for a certain region, sales of a product within a certain period, student loan amounts of graduates, etc. Some of the graphs that can be used with frequency distributions are histograms, line charts, bar charts and pie charts."}, {"text": "A statistical hypothesis is a hypothesis that is testable on the basis of observed data modelled as the realised values taken by a collection of random variables. A set of data is modelled as being realised values of a collection of random variables having a joint probability distribution in some set of possible joint distributions. The hypothesis being tested is exactly that set of possible probability distributions."}]}, {"question": "For correlation coefficient between two random variables to be a meaningful measure of their linear association do the variables need to be normally distributed", "positive_ctxs": [{"text": "The correlation coefficient is a measure of the degree of linear association between two continuous variables, i.e. when plotted together, how close to a straight line is the scatter of points. Both x and y must be continuous random variables (and Normally distributed if the hypothesis test is to be valid)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. 
If we are interested in finding to what extent there is a numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another, confounding, variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient."}, {"text": "In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors."}, {"text": "When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables."}, {"text": "Commonly used measures of association for the chi-squared test are the Phi coefficient and Cram\u00e9r's V (sometimes referred to as Cram\u00e9r's phi and denoted as \u03c6c). Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2 \u00d7 2). Cram\u00e9r's V may be used with variables having more than two levels."}, {"text": "If \u03c1XY equals +1 or \u22121, it can be shown that the points in the joint probability distribution that receive positive probability fall exactly along a straight line. Two random variables with nonzero correlation are said to be correlated. 
Similar to covariance, the correlation is a measure of the linear relationship between random variables."}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}]}, {"question": "What is a stochastic process provide an example", "positive_ctxs": [{"text": "A stochastic process is a family of random variables {X\u03b8}, where the parameter \u03b8 is drawn from an index set \u0398. For example, let's say the index set is \u201ctime\u201d. One example of a stochastic process that evolves over time is the number of customers (X) in a checkout line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An increment of a stochastic process is the difference between two random variables of the same stochastic process. 
For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period."}, {"text": "An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period."}, {"text": "A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is when a discrete-time or continuous-time stochastic process"}, {"text": "A sequence of random variables forms a stationary stochastic process only if the random variables are identically distributed.A stochastic process with the above definition of stationarity is sometimes said to be strictly stationary, but there are other forms of stationarity. One example is when a discrete-time or continuous-time stochastic process"}, {"text": "A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. More precisely, a stochastic process"}, {"text": "A modification of a stochastic process is another stochastic process, which is closely related to the original stochastic process. 
More precisely, a stochastic process"}, {"text": "Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear.Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or"}]}, {"question": "How do you find the covariance of a random variable", "positive_ctxs": [{"text": "The covariance between X and Y is defined as Cov(X,Y)=E[(X\u2212EX)(Y\u2212EY)]=E[XY]\u2212(EX)(EY).The covariance has the following properties:Cov(X,X)=Var(X);if X and Y are independent then Cov(X,Y)=0;Cov(X,Y)=Cov(Y,X);Cov(aX,Y)=aCov(X,Y);Cov(X+c,Y)=Cov(X,Y);Cov(X+Y,Z)=Cov(X,Z)+Cov(Y,Z);more generally,"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other, (that is, the variables tend to show opposite behavior), the covariance is negative."}, {"text": "This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. 
The variance can also be thought of as the covariance of a random variable with itself:"}, {"text": "This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:"}, {"text": "This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:"}, {"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}, {"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}]}, {"question": "What is the basic shape of the chi square distribution", "positive_ctxs": [{"text": "The chi-square distribution curve is skewed to the right, and its shape depends on the degrees of freedom df. For df > 90, the curve approximates the normal distribution. Test statistics based on the chi-square distribution are always greater than or equal to zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. 
In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}]}, {"question": "What is the average Gini coefficient", "positive_ctxs": [{"text": "The Gini coefficient for the entire world has been estimated by various parties to be between 0.61 and 0.68."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Gini coefficient is a relative measure. 
It is possible for the Gini coefficient of a developing country to rise (due to increasing inequality of income) while the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth."}, {"text": "Both countries have a Gini coefficient of 0.2, but the average income distributions for household groups are different. As another example, in a population where the lowest 50% of individuals have no income and the other 50% have equal income, the Gini coefficient is 0.5; whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can have very different income distributions."}, {"text": "The Gini coefficient can also be calculated directly from the cumulative distribution function of the distribution F(y). Defining \u03bc as the mean of the distribution, and specifying that F(y) is zero for all negative values, the Gini coefficient is given by:"}, {"text": "As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements.The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions."}, {"text": "like the Gini coefficient which is constrained to be between 0 and 1). It is, however, more mathematically tractable than the Gini coefficient."}, {"text": "An alternative approach is to define the Gini coefficient as half of the relative mean absolute difference, which is mathematically equivalent to the definition based on the Lorenz curve. 
The mean absolute difference is the average absolute difference of all pairs of items of the population, and the relative mean absolute difference is the mean absolute difference divided by the average,"}, {"text": "Thus a given economy may have a higher Gini coefficient at any one point in time compared to another, while the Gini coefficient calculated over individuals' lifetime income is actually lower than the apparently more equal (at a given point in time) economy's. Essentially, what matters is not just inequality in any particular year, but the composition of the distribution over time."}]}, {"question": "What is an acceptable R squared value", "positive_ctxs": [{"text": "R-squared should accurately reflect the percentage of the dependent variable variation that the linear model explains. Your R2 should not be any higher or lower than this value. However, if you analyze a physical process and have very good measurements, you might expect R-squared values over 90%."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "as the basis for optimality. This error term is then squared and the expected value of this squared value is minimized for the MMSE estimator."}, {"text": "It is common to make decisions under uncertainty. What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet \"guarantee\" acceptable performance?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Before the test is actually performed, the maximum acceptable probability of a Type I error (\u03b1) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.)"}, {"text": "Before the test is actually performed, the maximum acceptable probability of a Type I error (\u03b1) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.)"}, {"text": "Before the test is actually performed, the maximum acceptable probability of a Type I error (\u03b1) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.)"}]}, {"question": "What is the rule for rejecting Ho in terms of Z", "positive_ctxs": [{"text": "The decision rule is: Reject H0 if Z < 1.645. The decision rule is: Reject H0 if Z < -1.960 or if Z > 1.960. The complete table of critical values of Z for upper, lower and two-tailed tests can be found in the table of Z values to the right in \"Other Resources.\""}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm."}, {"text": "The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. 
Stated in terms of odds, Bayes' rule is that the posterior odds of two alternatives,"}, {"text": "The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule is that the posterior odds of two alternatives,"}, {"text": "In probability theory, the chain rule (also called the general product rule) permits the calculation of any member of the joint distribution of a set of random variables using only conditional probabilities. The rule is useful in the study of Bayesian networks, which describe a probability distribution in terms of conditional probabilities."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}]}, {"question": "What is a linear SVM", "positive_ctxs": [{"text": "Suggest Edits. Support Vector Machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. 
What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting."}, {"text": "Support vector machines (SVMs), also known as support vector networks, are a set of related supervised learning methods used for classification and regression. 
Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. An SVM training algorithm is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting."}]}, {"question": "Can deep learning be unsupervised", "positive_ctxs": [{"text": "Unsupervised learning is the Holy Grail of Deep Learning. The goal of unsupervised learning is to create general systems that can be trained with little data. Today Deep Learning models are trained on large supervised datasets. Meaning that for each data, there is a corresponding label."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. 
Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}]}, {"question": "How do you find the limit of a function with two variables", "positive_ctxs": [{"text": "1:1111:18Suggested clip \u00b7 91 secondsLimits of Functions of Two Variables - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). 
These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. 
Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "How do I use Word embeds for text classification", "positive_ctxs": [{"text": "Text classification using word embeddings and deep learning in python \u2014 classifying tweets from twitterSplit the data into text (X) and labels (Y)Preprocess X.Create a word embedding matrix from X.Create a tensor input from X.Train a deep learning model using the tensor inputs and labels (Y)More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Word segmentation \u2013 separates a chunk of continuous text into separate words. For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "AR systems such as Word Lens can interpret the foreign text on signs and menus and, in a user's augmented view, re-display the text in the user's language. Spoken words of a foreign language can be translated and displayed in a user's view as printed subtitles."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. 
on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}, {"text": "The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied."}, {"text": "Sukhotin's algorithm \u2013 statistical classification algorithm for classifying characters in a text as vowels or consonants. It was initially created by Boris V. Sukhotin."}]}, {"question": "What is the mean of a Gaussian distribution", "positive_ctxs": [{"text": "Gaussian Distribution Function The nature of the gaussian gives a probability of 0.683 of being within one standard deviation of the mean. The mean value is a=np where n is the number of events and p the probability of any integer value of x (this expression carries over from the binomial distribution)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A Gaussian process (GP) is a stochastic process in which any finite number of random variables that are sampled follow a joint Normal distribution. The mean vector and covariance matrix of the Gaussian distribution completely specify the GP. GPs are usually used as a priori distribution for functions, and as such the mean vector and covariance matrix can be viewed as functions, where the covariance function is also called the kernel of the GP."}, {"text": "where \u03a6(\u00b7) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. This z-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. 
However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available. The distribution of the sample partial correlation was described by Fisher."}, {"text": "the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above. A random variable X has a two-piece normal distribution if it has a distribution"}, {"text": "the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above. A random variable X has a two-piece normal distribution if it has a distribution"}, {"text": "the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above. A random variable X has a two-piece normal distribution if it has a distribution"}, {"text": "the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. Note that this distribution is different from the Gaussian q-distribution above. A random variable X has a two-piece normal distribution if it has a distribution"}, {"text": "the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy, and is one type of Tsallis distribution. 
Note that this distribution is different from the Gaussian q-distribution above. A random variable X has a two-piece normal distribution if it has a distribution"}]}, {"question": "What is dimensional analysis and how do we use it", "positive_ctxs": [{"text": "Dimensional Analysis (also called Factor-Label Method or the Unit Factor Method) is a problem-solving method that uses the fact that any number or expression can be multiplied by one without changing its value. It is a useful technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios."}, {"text": "What we do know, however, is that according to our working assumptions the estimate we have is a poor indication of the true value of the revenue and is likely to be substantially wrong. So, methodologically speaking, we have to display the true value at a distance from its estimate. In fact, it would be even more enlightening to display a number of possible true values."}, {"text": "We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to \u2113, and so infer that E = \u2113s. The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident."}, {"text": "The learning element uses feedback from the \"critic\" on how the agent is doing and determines how the performance element, or \"actor\", should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. 
What is special about financial derivatives?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is the relationship between Poisson and exponential distribution", "positive_ctxs": [{"text": "Just so, the Poisson distribution deals with the number of occurrences in a fixed period of time, and the exponential distribution deals with the time between occurrences of successive events as time flows by continuously."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean \u03bc, a confidence interval for \u03bc with confidence level 1 \u2013 \u03b1 is"}, {"text": "In probability theory and statistics, the exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless."}, {"text": "The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. 
There is always a well-defined canonical link function which is derived from the exponential of the response's density function."}, {"text": "The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice is informed by several considerations. There is always a well-defined canonical link function which is derived from the exponential of the response's density function."}, {"text": "The parameterization with \u03b1 and \u03b2 is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (rate) parameters, such as the \u03bb of an exponential distribution or a Poisson distribution \u2013 or for that matter, the \u03b2 of the gamma distribution itself. The closely related inverse-gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution."}, {"text": "Other examples of unimodal distributions include Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability."}, {"text": "the Conditional Normalized Maximum Likelihood (CNML) predictive distribution, from information theoretic considerations. The accuracy of a predictive distribution may be measured using the distance or divergence between the true exponential distribution with rate parameter, \u03bb0, and the predictive distribution based on the sample x. The Kullback\u2013Leibler divergence is a commonly used, parameterisation free measure of the difference between two distributions. 
Letting \u0394(\u03bb0||p) denote the Kullback\u2013Leibler divergence between an exponential with rate parameter \u03bb0 and a predictive distribution p it can be shown that"}]}, {"question": "Which method is used for predicting continuous dependent variable", "positive_ctxs": [{"text": "Regression analysis is used when you want to predict a continuous dependent variable from a number of independent variables. If the dependent variable is dichotomous, then logistic regression should be used."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. 
Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS."}, {"text": "The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS."}, {"text": "The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS."}, {"text": "The two-group method should be used when the dependent variable has two categories or states. The multiple discriminant method is used when the dependent variable has three or more categorical states. Use Wilks's Lambda to test for significance in SPSS or F stat in SAS."}]}, {"question": "Why do you use the second derivative test", "positive_ctxs": [{"text": "The second derivative may be used to determine local extrema of a function under certain conditions. If a function has a critical point for which f\u2032(x) = 0 and the second derivative is positive at this point, then f has a local minimum here. 
This technique is called Second Derivative Test for Local Extrema."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test')."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "It is the graph of a function, with derivative 2ax + b, and second derivative 2a. So, the signed curvature is"}, {"text": "The function x3/3 \u2212 x has first derivative x2 \u2212 1 and second derivative 2x. 
Setting the first derivative to 0 and solving for x gives stationary points at \u22121 and +1. From the sign of the second derivative, we can see that \u22121 is a local maximum and +1 is a local minimum."}, {"text": "Consider the parametrization \u03b3(t) = (t, at2 + bt + c) = (x, y). The first derivative of x is 1, and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with x replaced by t. If we use primes for derivatives with respect to the parameter t."}]}, {"question": "What is the difference between cluster and multistage sampling", "positive_ctxs": [{"text": "With cluster sampling, in contrast, the sample includes elements only from sampled clusters. Multistage sampling. With multistage sampling, we select a sample by using combinations of different sampling methods. For example, in Stage 1, we might use cluster sampling to choose clusters from a population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. 
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Two-stage cluster sampling, a simple case of multistage sampling, is obtained by selecting cluster samples in the first stage and then selecting a sample of elements from every sampled cluster. Consider a population of N clusters in total. In the first stage, n clusters are selected using ordinary cluster sampling method."}, {"text": "Two-stage cluster sampling, a simple case of multistage sampling, is obtained by selecting cluster samples in the first stage and then selecting a sample of elements from every sampled cluster. Consider a population of N clusters in total. In the first stage, n clusters are selected using ordinary cluster sampling method."}, {"text": "Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from."}, {"text": "Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from."}, {"text": "The sampling error is the error caused by observing a sample instead of the whole population. 
The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}]}, {"question": "What is minimum variance of an estimator", "positive_ctxs": [{"text": "In statistics a minimum-variance unbiased estimator (MVUE) or uniformly minimum-variance unbiased estimator (UMVUE) is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cram\u00e9r\u2013Rao bound, which is an absolute lower bound on variance for statistics of a variable."}, {"text": "Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias."}, {"text": "Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias."}, {"text": "We say that the estimator is a finite-sample efficient estimator (in the class of unbiased estimators) if it reaches the lower bound in the Cram\u00e9r\u2013Rao inequality above, for all \u03b8 \u2208 \u0398. Efficient estimators are always minimum variance unbiased estimators. 
However the converse is false: There exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient. Historically, finite-sample efficiency was an early optimality criterion."}, {"text": "We say that the estimator is a finite-sample efficient estimator (in the class of unbiased estimators) if it reaches the lower bound in the Cram\u00e9r\u2013Rao inequality above, for all \u03b8 \u2208 \u0398. Efficient estimators are always minimum variance unbiased estimators. However the converse is false: There exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient. Historically, finite-sample efficiency was an early optimality criterion."}, {"text": "We say that the estimator is a finite-sample efficient estimator (in the class of unbiased estimators) if it reaches the lower bound in the Cram\u00e9r\u2013Rao inequality above, for all \u03b8 \u2208 \u0398. Efficient estimators are always minimum variance unbiased estimators. However the converse is false: There exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient. Historically, finite-sample efficiency was an early optimality criterion."}, {"text": "is uniformly minimum variance unbiased (UMVU), which makes it the \"best\" estimator among all unbiased ones. However it can be shown that the biased estimator"}]}, {"question": "Can a weak correlation be significant", "positive_ctxs": [{"text": "Do not confuse statistical significance with practical importance. However, a weak correlation can be statistically significant, if the sample size is large enough."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This topic is of particular importance because in some cases data aggregation can obscure a strong correlation between variables, making the relationship appear weak or even negative. 
Conversely, MAUP can cause random variables to appear as if there is a significant association where there is not. Multivariate regression parameters are more sensitive to MAUP than correlation coefficients."}, {"text": "For example, a sample Pearson correlation coefficient of 0.01 is statistically significant if the sample size is 1000. Reporting only the significant p-value from this analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application."}, {"text": "Mixed logit- Allows any form of correlation and substitution patterns. When a mixed logit is with jointly normal random terms, the models is sometimes called \"multinomial probit model with logit kernel\". Can be applied to route choice.The following sections describe Nested Logit, GEV, Probit, and Mixed Logit models in detail."}, {"text": "Effect size is a measure of a study's practical significance. A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values."}, {"text": "Effect size is a measure of a study's practical significance. A statistically significant result may have a weak effect. To gauge the research significance of their result, researchers are encouraged to always report an effect size along with p-values."}, {"text": "This also reveals weaknesses of significance testing: A result can be significant without a good estimate of the strength of a relationship; significance can be a modest goal. A weak relationship can also achieve significance with enough data. Reporting both significance and confidence intervals is commonly recommended."}, {"text": "somewhat more money, or moderate utility increase) for middle-incoming people; would cause significant benefits for high-income people. 
On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps a weak benefit to middle-income people, and significant negative benefit to high-income people."}]}, {"question": "What is an example of active learning", "positive_ctxs": [{"text": "In active learning teachers are facilitators rather than one way providers of information. Other examples of active learning techniques include role-playing, case studies, group projects, think-pair-share, peer teaching, debates, Just-in-Time Teaching, and short demonstrations followed by class discussion."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "There are a wide range of alternatives for the term active learning, such as: learning through play, technology-based learning, activity-based learning, group work, project method, etc. The common factors in these are some significant qualities and characteristics of active learning. Active learning is the opposite of passive learning; it is learner-centered, not teacher-centered, and requires more than just listening; the active participation of each and every student is a necessary aspect in active learning."}, {"text": "Many research studies have proven that active learning as a strategy has promoted achievement levels and some others say that content mastery is possible through active learning strategies. 
However, some students as well as teachers find it difficult to adapt to the new learning technique. There is intensive use of scientific and quantitative literacy across the curriculum, and technology-based learning is also in high demand in concern with active learning. Barnes (1989) suggested principles of active learning:"}, {"text": "A reaction to a video is also an example of active learning because most students love to watch movies. The video helps the student to understand what they are learning at the time in an alternative presentation mode. Make sure that the video relates to the topic that they are studying at the moment."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is meant by Overfitting of data", "positive_ctxs": [{"text": "Overfitting is a modeling error that occurs when a function is too closely fit to a limited set of data points. Thus, attempting to make the model conform too closely to slightly inaccurate data can infect the model with substantial errors and reduce its predictive power."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But the original use of the phrase \"complete Archimedean field\" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of"}, {"text": "Consider the ordered list {1,2,3,4} which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?"}, {"text": "What happened however is that the packed SIMD register holds a certain amount of data so it's not possible to get more parallelism. The speed up is somewhat limited by the assumption we made of performing four parallel operations (please note this is common for both AltiVec and SSE)."}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?"}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. 
What is the 40th percentile of this list using this variant method?"}]}, {"question": "What is Eliza in artificial intelligence", "positive_ctxs": [{"text": "ELIZA is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. 
Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}, {"text": "In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}]}, {"question": "How do Sobel filters work", "positive_ctxs": [{"text": "The Sobel filter is used for edge detection. 
It works by calculating the gradient of image intensity at each pixel within the image. It finds the direction of the largest increase from light to dark and the rate of change in that direction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The Sobel operator, sometimes called the Sobel\u2013Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms where it creates an image emphasising edges. It is named after Irwin Sobel and Gary Feldman, colleagues at the Stanford Artificial Intelligence Laboratory (SAIL). Sobel and Feldman presented the idea of an \"Isotropic 3x3 Image Gradient Operator\" at a talk at SAIL in 1968."}, {"text": "As a consequence of its definition, the Sobel operator can be implemented by simple means in both hardware and software: only eight image points around a point are needed to compute the corresponding result and only integer arithmetic is needed to compute the gradient vector approximation. Furthermore, the two discrete filters described above are both separable:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "A similar optimization strategy and resulting filters were also presented by Farid and Simoncelli. They also investigate higher-order derivative schemes. In contrast to the work of Scharr, these filters are not enforced to be numerically consistent."}, {"text": "Collaborative filters are expected to increase diversity because they help us discover new products. Some algorithms, however, may unintentionally do the opposite. 
Because collaborative filters recommend products based on past sales or ratings, they cannot usually recommend products with limited historical data."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is text Matrix", "positive_ctxs": [{"text": "A term-document matrix represents the processed text from a text analysis as a table or matrix where the rows represent the text responses, or documents, and the columns represent the words or phrases (the terms). matrix)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Text to Matrix Generator (TMG) MATLAB toolbox that can be used for various tasks in text mining (TM) specifically i) indexing, ii) retrieval, iii) dimensionality reduction, iv) clustering, v) classification. The indexing step offers the user the ability to apply local and global weighting methods, including tf\u2013idf."}, {"text": "An algebraic formulation of the above can be obtained by using the min-plus algebra. 
Matrix multiplication in this system is defined as follows: Given two"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "Why do we use non probability sampling", "positive_ctxs": [{"text": "Non-probability sampling is often used because the procedures used to select units for inclusion in a sample are much easier, quicker and cheaper when compared with probability sampling. This is especially the case for convenience sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One approach to inference uses large sample approximations to the sampling distribution of the log odds ratio (the natural logarithm of the odds ratio). If we use the joint probability notation defined above, the population log odds ratio is"}, {"text": "given all the observed data. Because the prior is unspecified, we seek to do this without knowledge of G.Under squared error loss (SEL), the conditional expectation E(\u03b8i | Yi = yi) is a reasonable quantity to use for prediction. For the Poisson compound sampling model, this quantity is"}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function:"}, {"text": "Several efficient algorithms for simple random sampling have been developed. A naive algorithm is the draw-by-draw algorithm where at each step we remove the item at that step from the set with equal probability and put the item in the sample. 
We continue until we have sample of desired size"}, {"text": "Several efficient algorithms for simple random sampling have been developed. A naive algorithm is the draw-by-draw algorithm where at each step we remove the item at that step from the set with equal probability and put the item in the sample. We continue until we have sample of desired size"}, {"text": "Several efficient algorithms for simple random sampling have been developed. A naive algorithm is the draw-by-draw algorithm where at each step we remove the item at that step from the set with equal probability and put the item in the sample. We continue until we have sample of desired size"}]}, {"question": "What language is used for data mining", "positive_ctxs": [{"text": "R is now used by over 50% of data miners. R, Python, and SQL were the most popular programming languages. Python, Lisp/Clojure, and Unix tools showest the highest growth in 2012, while Java and MATLAB slightly declined in popularity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Data mining is used wherever there is digital data available today. Notable examples of data mining can be found throughout business, medicine, science, and surveillance."}, {"text": "Depending on the amount and format of the incoming data, data wrangling has traditionally been performed manually (e.g. via spreadsheets such as Excel), tools like KNIME or via scripts in languages such as Python or SQL. R, a language often used in data mining and statistical data analysis, is now also often used for data wrangling."}, {"text": "Data wrangling is a superset of data mining and requires processes that some data mining uses, but not always. The process of data mining is to find patterns within large data sets, where data wrangling transforms data in order to deliver insights about that data. 
Even though data wrangling is a superset of data mining does not mean that data mining does not use it, there are many use cases for data wrangling in data mining."}, {"text": "In order for neural network models to be shared by different applications, a common language is necessary. The Predictive Model Markup Language (PMML) has been proposed to address this need. PMML is an XML-based language which provides a way for applications to define and share neural network models (and other data mining models) between PMML compliant applications."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For exchanging the extracted models\u2014in particular for use in predictive analytics\u2014the key standard is the Predictive Model Markup Language (PMML), which is an XML-based language developed by the Data Mining Group (DMG) and supported as exchange format by many data mining applications. As the name suggests, it only covers prediction models, a particular data mining task of high importance to business applications. However, extensions to cover (for example) subspace clustering have been proposed independently of the DMG."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}]}, {"question": "What is tensor quantity", "positive_ctxs": [{"text": "A tensor is a quantity, for example a stress or a strain, which has magnitude, direction, and a plane in which it acts. Stress and strain are both tensor quantities. 
A tensor is a quantity, for example a stress or a strain, which has magnitude, direction, and a plane in which it acts."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). In EMP, a tensor is projected to a point through N unit projection vectors."}, {"text": "When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. 
The generic rank of tensors living in an unbalanced tensor spaces is known to equal"}]}, {"question": "Is stochastic gradient descent linear", "positive_ctxs": [{"text": "Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. 
It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}]}, {"question": "How is CAC ratio calculated", "positive_ctxs": [{"text": "The CAC ratio is calculated by looking at the quarter over quarter increase in gross margin divided by the total sales and marketing expenses for that quarter. Gross margin is the total revenue minus cost of goods sold."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The model fits well when the residuals (i.e., observed-expected) are close to 0, that is the closer the observed frequencies are to the expected frequencies the better the model fit. If the likelihood ratio chi-square statistic is non-significant, then the model fits well (i.e., calculated expected frequencies are close to observed frequencies). If the likelihood ratio chi-square statistic is significant, then the model does not fit well (i.e., calculated expected frequencies are not close to observed frequencies)."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."}, {"text": "In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."}, {"text": "In one-way analysis of variance, MSE can be calculated by the division of the sum of squared errors and the degree of freedom. Also, the f-value is the ratio of the mean squared treatment and the MSE."}, {"text": "In one-way analysis of variance, MSE can be calculated by the division of the sum of squared errors and the degree of freedom. Also, the f-value is the ratio of the mean squared treatment and the MSE."}]}, {"question": "Will artificial intelligence supersede human intelligence", "positive_ctxs": [{"text": "The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027). 
But many other tasks will take much longer for machines to master."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}, {"text": "Kaplan and Haenlein structure artificial intelligence along three evolutionary stages: 1) artificial narrow intelligence \u2013 applying AI only to specific tasks; 2) artificial general intelligence \u2013 applying AI to several areas and able to autonomously solve problems they were never even designed for; and 3) artificial super intelligence \u2013 applying AI to any area capable of scientific creativity, social skills, and general wisdom.To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results."}, {"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. 
It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}, {"text": "Superintelligence \u2013 (hypothetical) artificial intelligence far surpassing that of the brightest and most gifted human minds. Due to recursive self-improvement, superintelligence is expected to be a rapid outcome of creating artificial general intelligence."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}]}, {"question": "What is the difference between general artificial intelligence and artificial intelligence", "positive_ctxs": [{"text": "Whereas AI is preprogrammed to carry out a task that a human can but more efficiently, artificial general intelligence (AGI) expects the machine to be just as smart as a human. 
A machine that was able to do this would be considered a fine example of AGI."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}, {"text": "Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon predicted the following in 1965: \"machines will be capable, within twenty years, of doing any work a man can do\". 
At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster than light spaceflight."}, {"text": "Superintelligence \u2013 (hypothetical) artificial intelligence far surpassing that of the brightest and most gifted human minds. Due to recursive self-improvement, superintelligence is expected to be a rapid outcome of creating artificial general intelligence."}, {"text": "Kaplan and Haenlein structure artificial intelligence along three evolutionary stages: 1) artificial narrow intelligence \u2013 applying AI only to specific tasks; 2) artificial general intelligence \u2013 applying AI to several areas and able to autonomously solve problems they were never even designed for; and 3) artificial super intelligence \u2013 applying AI to any area capable of scientific creativity, social skills, and general wisdom.To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results."}, {"text": "Artificial general intelligence (AGI) is the hypothetical intelligence of a computer program that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI,"}]}, {"question": "What is the relationship between the linear correlation coefficient r and the slope b 1 of a regression line", "positive_ctxs": [{"text": "Any point directly on the y-axis has an X value of 0. Multiple Choice: In a simple Linear regression problem, r and b1. Explanation: r= correlation coefficient and b1= slope. 
If we have a downward sloping trend-line then that means we have a negative (or inverse) correlation coefficient."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. 
As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "If a pair (X, Y) of random variables follows a bivariate normal distribution, then the conditional mean E(Y|X) is a linear function of X. The correlation coefficient r between X and Y, along with the marginal means and variances of X and Y, determines this linear relationship:"}, {"text": "Like the correlation coefficient, the partial correlation coefficient takes on a value in the range from \u20131 to 1. The value \u20131 conveys a perfect negative correlation controlling for some variables (that is, an exact linear relationship in which higher values of one variable are associated with lower values of the other); the value 1 conveys a perfect positive linear relationship, and the value 0 conveys that there is no linear relationship."}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. 
The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}]}, {"question": "How do you evaluate the accuracy of a regression result", "positive_ctxs": [{"text": "The name tells you how to calculate it. You subtract the regression-predicted values from the actual values, square them (to get rid of directionality), take their average, then take the square root of the average."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "DCG uses a graded relevance scale of documents from the result set to evaluate the usefulness, or gain, of a document based on its position in the result list. The premise of DCG is that highly relevant documents appearing lower in a search result list should be penalized as the graded relevance value is reduced logarithmically proportional to the position of the result."}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. 
It selects a reduced set of the known covariates for use in a model."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "What is gradient descent in logistic regression", "positive_ctxs": [{"text": "Gradient Descent is the process of minimizing a function by following the gradients of the cost function. This involves knowing the form of the cost as well as the derivative so that from a given point you know the gradient and can move in that direction, e.g. downhill towards the minimum value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. 
Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. 
When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}]}, {"question": "How do you find the probability of a Poisson distribution", "positive_ctxs": [{"text": "Poisson Formula. Suppose we conduct a Poisson experiment, in which the average number of successes within a given region is \u03bc. 
Then, the Poisson probability is: P(x; \u03bc) = (e^-\u03bc) (\u03bc^x) / x! where x is the actual number of successes that result from the experiment, and e is approximately equal to 2.71828."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution."}, {"text": "The probability distribution of the number of fixed points in a uniformly distributed random permutation approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion\u2013exclusion principle to show that the probability that there are no fixed points approaches 1/e. When n is big enough, the probability distribution of fixed points is almost the Poisson distribution with expected value 1."}, {"text": "In probability theory and statistics, the Poisson distribution (; French pronunciation: \u200b[pwas\u0254\u0303]), named after French mathematician Sim\u00e9on Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume."}, {"text": "In probability theory, one may describe the distribution of a random variable as belonging to a family of probability distributions, distinguished from each other by the values of a finite number of parameters. For example, one talks about \"a Poisson distribution with mean value \u03bb\". 
The function defining the distribution (the probability mass function) is:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the \"law of small numbers\" because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz about the Poisson distribution, published in 1898."}, {"text": "In probability theory and statistics, the exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless."}]}, {"question": "What are the advantages of distributed representations", "positive_ctxs": [{"text": "Advantages of distributed representations Mapping efficiency: a micro-feature-based distributed representation often allows a simple mapping (that uses few connections or weights) to solve a task. For example, suppose we wish to classify 100 different colored shapes as to whether or not they are yellow."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Allows partial matchingMost of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency-inverse document frequency approaches. When using Boolean weights, any document lies in a vertex in a n-dimensional hypercube. 
Therefore, the possible document representations are"}, {"text": "According to French (1991), catastrophic interference arises in feedforward backpropagation networks due to the interaction of node activations, or activation overlap, that occurs in distributed representations at the hidden layer. Neural networks that employ very localized representations do not show catastrophic interference because of the lack of overlap at the hidden layer. French therefore suggested that reducing the value of activation overlap at the hidden layer would reduce catastrophic interference in distributed networks."}, {"text": "The main cause of catastrophic interference seems to be overlap in the representations at the hidden layer of distributed neural networks. In a distributed representation, each input tends to create changes in the weights of many of the nodes. Catastrophic forgetting occurs because when many of the weights where \"knowledge is stored\" are changed, it is unlikely for prior knowledge to be kept intact."}, {"text": "What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary."}, {"text": "What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary."}, {"text": "The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. 
Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva\u2019s Sparse distributed memory architecture."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}]}, {"question": "Are the order statistics independent", "positive_ctxs": [{"text": "As Justin Rising points out, the order statistics are clearly not independent of each other. If the observations are independent and identically distributed from a continuous distribution, then any ordering of the samples is equally likely."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference."}, {"text": "Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems."}, {"text": "Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?"}, {"text": "Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?"}, {"text": "It is important to obtain some indication about how generalizable the results are. While this is often difficult to check, one can look at the stability of the results. 
Are the results reliable and reproducible?"}, {"text": "We assume that the source is producing independent symbols, with possibly different output statistics at each instant. We assume that the statistics of the process are known completely, that is, the marginal distribution of the process seen at each time instant is known. The joint distribution is just the product of marginals."}, {"text": "One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:"}]}, {"question": "How do you calculate decision trees", "positive_ctxs": [{"text": "The value to be gained from taking a decision. Net gain is calculated by adding together the expected value of each outcome and deducting the costs associated with the decision."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Early decision trees were only capable of handling categorical variables, but more recent versions, such as C4.5, do not have this limitation."}, {"text": "Early decision trees were only capable of handling categorical variables, but more recent versions, such as C4.5, do not have this limitation."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "An alternating decision tree (ADTree) is a machine learning method for classification. It generalizes decision trees and has connections to boosting."}]}, {"question": "How can Multicollinearity be reduced", "positive_ctxs": [{"text": "How to Deal with MulticollinearityRemove some of the highly correlated independent variables.Linearly combine the independent variables, such as adding them together.Perform an analysis designed for highly correlated variables, such as principal components analysis or partial least squares regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Spectral Regularization is also used to enforce a reduced rank coefficient matrix in multivariate regression. In this setting, a reduced rank coefficient matrix can be found by keeping just the top"}, {"text": "Multicollinearity refers to a situation in which more than two explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if, for example as in the equation above, the correlation"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The interactions of neurons in a small network can be often reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well-characterized theoretically. There has been some recent evidence that suggests that dynamics of arbitrary neuronal networks can be reduced to pairwise interactions."}, {"text": "Multicollinearity may represent a serious issue in survival analysis. The problem is that time-varying covariates may change their value over the time line of the study. 
A special procedure is recommended to assess the impact of multicollinearity on the results."}, {"text": "How the dimensions of the embedding actually correspond to dimensions of system behavior, however, are not necessarily obvious. Here, a subjective judgment about the correspondence can be made (see perceptual mapping)."}, {"text": "Intuitively, bias is reduced by using only local information, whereas variance can only be reduced by averaging over multiple observations, which inherently means using information from a larger region. For an enlightening example, see the section on k-nearest neighbors or the figure on the right."}]}, {"question": "What is contested concept", "positive_ctxs": [{"text": "Each party in a dispute recognises that its own use of the concept is contested by those of other parties. To use an essentially contested concept means to use it against other users. To use such a concept means to use it aggresssively and defensively."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The disputes that attend an essentially contested concept are driven by substantive disagreements over a range of different, entirely reasonable (although perhaps mistaken) interpretations of a mutually-agreed-upon archetypical notion, such as the legal precept \"treat like cases alike; and treat different cases differently\", with \"each party [continuing] to defend its case with what it claims to be convincing arguments, evidence and other forms of justification\".Gallie speaks of how \"This picture is painted in oils\" can be successfully contested if the work is actually painted in tempera; while \"This picture is a work of art\" may meet strong opposition due to disputes over what \"work of art\" denotes. He suggests three avenues whereby one might resolve such disputes:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The contested resource is divided. In essence, this means both conflicting parties display some extent of shift in priorities which then opens up for some form of \"meeting the other side halfway\" agreement."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "So long as contestant users of any essentially contested concept believe, however deludedly, that their own use of it is the only one that can command honest and informed approval, they are likely to persist in the hope that they will ultimately persuade and convert all their opponents by logical means. 
But once [we] let the truth out of the bag \u2014 i.e., the essential contestedness of the concept in question \u2014 then this harmless if deluded hope may well be replaced by a ruthless decision to cut the cackle, to damn the heretics and to exterminate the unwanted."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "Why do we use cluster sampling", "positive_ctxs": [{"text": "Use. Cluster sampling is typically used in market research. It's used when a researcher can't get information about the population as a whole, but they can get information about the clusters. Cluster sampling is often more economical or more practical than stratified sampling or simple random sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Indicating a larger expected increase in the variance of the estimator). 
In other words, the more there is heterogeneity between clusters and more homogeneity between subjects within a cluster, the less accurate our estimators become. This is because in such cases we are better off sampling as many clusters as we can and making do with a small sample of subjects from within each cluster (i.e."}, {"text": "Indicating a larger expected increase in the variance of the estimator). In other words, the more there is heterogeneity between clusters and more homogeneity between subjects within a cluster, the less accurate our estimators become. This is because in such cases we are better off sampling as many clusters as we can and making do with a small sample of subjects from within each cluster (i.e."}, {"text": "An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster."}, {"text": "An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster."}, {"text": "The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a \"one-stage\" cluster sampling plan.
If a simple random subsample of elements is selected within each of these groups, this is referred to as a \"two-stage\" cluster sampling plan."}]}, {"question": "How do you read a relative frequency density histogram", "positive_ctxs": [{"text": "0:082:33Suggested clip \u00b7 117 secondsHistogram Finding Frequency - Corbettmaths - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "A histogram is a representation of tabulated frequencies, shown as adjacent rectangles or squares (in some of situations), erected over discrete intervals (bins), with an area proportional to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. 
The total area of the histogram is equal to the number of data."}, {"text": "A histogram is a representation of tabulated frequencies, shown as adjacent rectangles or squares (in some of situations), erected over discrete intervals (bins), with an area proportional to the frequency of the observations in the interval. The height of a rectangle is also equal to the frequency density of the interval, i.e., the frequency divided by the width of the interval. The total area of the histogram is equal to the number of data."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution."}, {"text": "When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution."}]}, {"question": "What are the decision boundaries for linear discriminant analysis", "positive_ctxs": [{"text": "It is linear if there exists a function H(x) = \u03b20 + \u03b2T x such that h(x) = I(H(x) > 0). H(x) is also called a linear discriminant function. The decision boundary is therefore defined as the set {x \u2208 Rd : H(x)=0}, which corresponds to a (d \u2212 1)-dimensional hyperplane within the d-dimensional input space X."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. 
The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "This leads to the framework of regularized discriminant analysis or shrinkage discriminant analysis.Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space."}, {"text": "This leads to the framework of regularized discriminant analysis or shrinkage discriminant analysis.Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space."}, {"text": "This leads to the framework of regularized discriminant analysis or shrinkage discriminant analysis.Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. 
Here, the original observations are effectively mapped into a higher dimensional non-linear space."}, {"text": "This leads to the framework of regularized discriminant analysis or shrinkage discriminant analysis.Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space."}]}, {"question": "What are the properties of good knowledge representation techniques", "positive_ctxs": [{"text": "A good knowledge representation system must have properties such as: Representational Accuracy: It should represent all kinds of required knowledge. Inferential Adequacy: It should be able to manipulate the representational structures to produce new knowledge corresponding to the existing structure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL)."}, {"text": "Fitness is measured by scoring the output from the functions of the Lisp code. 
Similar analogues between the tree-structured Lisp representation and the representation of grammars as trees made the application of genetic programming techniques possible for grammar induction."}, {"text": "But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Other prominent nonlinear techniques include manifold learning techniques such as Isomap, locally linear embedding (LLE), Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis. These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA."}, {"text": "Other prominent nonlinear techniques include manifold learning techniques such as Isomap, locally linear embedding (LLE), Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis. These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for Kernel PCA."}]}, {"question": "What is reinforcement learning example", "positive_ctxs": [{"text": "In the example of reinforcement learning, your cat is an agent that is exposed to the environment. The biggest characteristic of this method is that there is no supervisor, only a real number or reward signal.
Two types of reinforcement learning are 1) Positive 2) Negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. 
The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Another application of MDP process in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detail learning automata paper is surveyed by Narendra and Thathachar (1974), which were originally described explicitly as finite state automata."}, {"text": "Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of state space."}, {"text": "The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system acts upon in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. 
In contrast to traditional learning techniques, which rely on less flexible supervised learning approaches, reinforcement learning recommendation techniques make it possible to train models that can be optimized directly on metrics of engagement and user interest."}, {"text": "Error-driven learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning."}]}, {"question": "What category of machine learning algorithm finds patterns in the data when the data is not labeled", "positive_ctxs": [{"text": "Unsupervised Learning is the second type of machine learning, in which unlabeled data are used to train the algorithm, which means it is used against data that has no historical labels."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. A combination of the two that has recently been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data). Note that in cases of unsupervised learning, there may be no training data at all to speak of; in other words, the data to be labeled is the training data."}, {"text": "Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. A combination of the two that has recently been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data).
Note that in cases of unsupervised learning, there may be no training data at all to speak of; in other words, the data to be labeled is the training data."}, {"text": "Not all patterns found by data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained."}, {"text": "Stability, also known as algorithmic stability, is a notion in computational learning theory of how a machine learning algorithm is perturbed by small changes to its inputs. A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained to recognize handwritten letters of the alphabet, using 1000 examples of handwritten letters and their labels (\"A\" to \"Z\") as a training set."}, {"text": "The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data; in contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations."}, {"text": "Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. 
This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm."}, {"text": "Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm."}]}, {"question": "What type of deep learning models are best suited for image recognition", "positive_ctxs": [{"text": "Convolutional Neural Networks (CNNs) is the most popular neural network model being used for image classification problem. The big idea behind CNNs is that a local understanding of an image is good enough."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The promise of using deep learning tools in reinforcement learning is generalization: the ability to operate correctly on previously unseen inputs. For instance, neural networks trained for image recognition can recognize that a picture contains a bird even it has never seen that particular image or even that particular bird. Since deep RL allows raw data (e.g."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNN)s, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. 
In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own."}, {"text": "Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNN)s, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own."}, {"text": "Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNN)s, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. 
In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own."}, {"text": "Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNN)s, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own."}, {"text": "Most modern deep learning models are based on artificial neural networks, specifically convolutional neural networks (CNN)s, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.In deep learning, each level learns to transform its input data into a slightly more abstract and composite representation. 
In an image recognition application, the raw input may be a matrix of pixels; the first representational layer may abstract the pixels and encode edges; the second layer may compose and encode arrangements of edges; the third layer may encode a nose and eyes; and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place in which level on its own."}]}, {"question": "How do I start preparing for data structures and algorithms", "positive_ctxs": [{"text": "Pre-Interview PreparationDevelop a deep knowledge of data structures. You should understand and be able to talk about different data structures and their strengths, weaknesses, and how they compare to each other. Understand Big O notation. Know the major sorting algorithms."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices, as they are common in the machine learning field. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "The strategy to find an order statistic in sublinear time is to store the data in an organized fashion using suitable data structures that facilitate the selection. 
Two such data structures are tree-based structures and frequency tables."}, {"text": "To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i. e, infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions."}, {"text": "To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i. e, infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions."}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}]}, {"question": "Why do we use sigmoid function", "positive_ctxs": [{"text": "The main reason why we use sigmoid function is because it exists between (0 to 1). Therefore, it is especially used for models where we have to predict the probability as an output. Since probability of anything exists only between the range of 0 and 1, sigmoid is the right choice. The function is differentiable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. 
A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}]}, {"question": "What does streaming data mean", "positive_ctxs": [{"text": "Streaming Data is data that is generated continuously by thousands of data sources, which typically send in the data records simultaneously, and in small sizes (order of Kilobytes)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "variables, no discretization procedure is necessary. 
This method is applicable to stationary streaming data as well as large data sets. For non-stationary streaming data, where the Spearman's rank correlation coefficient may change over time, the same procedure can be applied, but to a moving window of observations."}, {"text": "Spark Streaming uses Spark Core's fast scheduling capability to perform streaming analytics. It ingests data in mini-batches and performs RDD transformations on those mini-batches of data. This design enables the same set of application code written for batch analytics to be used in streaming analytics, thus facilitating easy implementation of lambda architecture."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "By early 2013 Billboard had announced that it was factoring YouTube streaming data into calculation of the Billboard Hot 100 and related genre charts."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators. These estimators, based on Hermite polynomials,"}, {"text": "Data wrangling is a superset of data mining and requires processes that some data mining uses, but not always. The process of data mining is to find patterns within large data sets, where data wrangling transforms data in order to deliver insights about that data. 
Even though data wrangling is a superset of data mining, that does not mean data mining does not use it; there are many use cases for data wrangling in data mining."}]}, {"question": "What is the purpose of mean median mode and range", "positive_ctxs": [{"text": "- Mode-The most repetitive number! - Median:The number in the MIDDLE when they are IN ORDER! - Mean- The AVERAGE OF ALL NUMBERS: You add up all the numbers then you divide it by the TOTAL NUMBER of NUMBERS! - Range - THE BIGGEST minus the Smallest!"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If 1 < \u03b1 < \u03b2 then mode \u2264 median \u2264 mean. Expressing the mode (only for \u03b1, \u03b2 > 1), and the mean in terms of \u03b1 and \u03b2:"}, {"text": "Unlike the mode and the mean which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is defined as the value"}, {"text": "By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions."}, {"text": "By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions."}, {"text": "By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes.
While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions."}, {"text": "By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions."}, {"text": "By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income and favors the larger number of people with lower incomes. While the median and mode are often more intuitive measures for such skewed data, many skewed distributions are in fact best described by their mean, including the exponential and Poisson distributions."}]}, {"question": "What is the formula for the standard deviation of the sampling distribution of the sample mean X", "positive_ctxs": [{"text": "The standard deviation of the sample mean \u02c9X that we have just computed is the standard deviation of the population divided by the square root of the sample size: \u221a10=\u221a20/\u221a2."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. 
This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE."}, {"text": "In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. 
The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE."}, {"text": "In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}]}, {"question": "How do you tell the difference between correlation and causation", "positive_ctxs": [{"text": "Causation explicitly applies to cases where action A {quote:right}Causation explicitly applies to cases where action A causes outcome B. {/quote} causes outcome B. On the other hand, correlation is simply a relationship. Action A relates to Action B\u2014but one event doesn't necessarily cause the other event to happen."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? 
How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The Kerby simple difference formula computes the rank-biserial correlation from the common language effect size. Letting f be the proportion of pairs favorable to the hypothesis (the common language effect size), and letting u be the proportion of pairs not favorable, the rank-biserial r is the simple difference between the two proportions: r = f \u2212 u. In other words, the correlation is the difference between the common language effect size and its complement."}, {"text": "In statistics, the range of a set of data is the difference between the largest and smallest values. It can give you a rough idea of how the outcome of the data set will be before you look at it actually"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written \u03c1XY\u00b7Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. 
The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation are available in Guilford (1973, pp."}, {"text": "If the difference between the previous threshold value and the new threshold value are below a specified limit, you are finished. Otherwise apply the new threshold to the original image keep trying."}]}, {"question": "What is the contribution to the chi square statistic", "positive_ctxs": [{"text": "Categories with a large difference between observed and expected values make a larger contribution to the overall chi-square statistic. In these results, the contribution values from each category sum to the overall chi-square statistic, which is 0.65."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution."}, {"text": "Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. 
The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution."}, {"text": "Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}]}, {"question": "What is a statistical hypothesis example", "positive_ctxs": [{"text": "A statistical hypothesis is a formal claim about a state of nature structured within the framework of a statistical model. 
For example, one could claim that the median time to failure from (accelerated) electromigration of the chip population described in Section 6.1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For some of the above problems, it may also be interesting to ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence."}, {"text": "In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a \"false positive\" finding or conclusion; example: \"an innocent person is convicted\"), while a type II error is the non-rejection of a false null hypothesis (also known as a \"false negative\" finding or conclusion; example: \"a guilty person is not convicted\"). Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility for non-deterministic algorithms."}, {"text": "In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a \"false positive\" finding or conclusion; example: \"an innocent person is convicted\"), while a type II error is the non-rejection of a false null hypothesis (also known as a \"false negative\" finding or conclusion; example: \"a guilty person is not convicted\").
Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility for non-deterministic algorithms."}, {"text": "In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a \"false positive\" finding or conclusion; example: \"an innocent person is convicted\"), while a type II error is the non-rejection of a false null hypothesis (also known as a \"false negative\" finding or conclusion; example: \"a guilty person is not convicted\"). Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility for non-deterministic algorithms."}, {"text": "In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a \"false positive\" finding or conclusion; example: \"an innocent person is convicted\"), while a type II error is the non-rejection of a false null hypothesis (also known as a \"false negative\" finding or conclusion; example: \"a guilty person is not convicted\"). Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility for non-deterministic algorithms."}, {"text": "In statistical hypothesis testing, a type I error is the rejection of a true null hypothesis (also known as a \"false positive\" finding or conclusion; example: \"an innocent person is convicted\"), while a type II error is the non-rejection of a false null hypothesis (also known as a \"false negative\" finding or conclusion; example: \"a guilty person is not convicted\"). 
Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility for non-deterministic algorithms."}, {"text": "in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not, at the same time, to investigate other hypotheses, then such a test is called a significance test. Note that the hypothesis might specify the probability distribution of"}]}, {"question": "Why do we need calibration in machine learning", "positive_ctxs": [{"text": "In this blog we will learn what is calibration and why and when we should use it. We calibrate our model when the probability estimate of a data point belonging to a class is very important. Calibration is comparison of the actual output and the expected output given by a system."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u201c\u2026prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head.\""}, {"text": "In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing part."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. 
For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals."}, {"text": "To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function:"}, {"text": "To do this in the ideal case, for all the adults in the population we would need to know whether they (a) had the exposure to the injury as children and (b) whether they developed the disease as adults. From this we would extract the following information: the total number of people exposed to the childhood injury,"}]}, {"question": "How do you explain standard deviation in statistics", "positive_ctxs": [{"text": "Standard deviation (represented by the symbol sigma, \u03c3 ) shows how much variation or dispersion exists from the average (mean), or expected value. More precisely, it is a measure of the average distance between the values of the data in the set and the mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. 
For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than the deviation of each normal distribution."}, {"text": "MultiSURF* extends the SURF* algorithm adapting the near/far neighborhood boundaries based on the average and standard deviation of distances from the target instance to all others. MultiSURF* uses the standard deviation to define a dead-band zone where 'middle-distance' instances do not contribute to scoring. Evidence suggests MultiSURF* performs best in detecting pure 2-way feature interactions."}, {"text": "The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a \"natural\" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point."}, {"text": "The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a \"natural\" measure of statistical dispersion if the center of the data is measured about the mean. This is because the standard deviation from the mean is smaller than from any other point."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "Descriptive statistics can be used to summarize the population data.
Numerical descriptors include mean and standard deviation for continuous data (like income), while frequency and percentage are more useful in terms of describing categorical data (like education)."}]}, {"question": "What is the difference between Q learning and Sarsa", "positive_ctxs": [{"text": "So the difference is in the way the future reward is found. In Q-learning it's simply the highest possible action that can be taken from state 2, and in SARSA it's the value of the actual action that was taken."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit"}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "The perceptron learning rule originates from the Hebbian assumption, and was used by Frank Rosenblatt in his perceptron in 1958. The net is passed to the activation (transfer) function and the function's output is used for adjusting the weights. 
The learning signal is the difference between the desired response and the actual response of a neuron."}, {"text": "Select a random subset Q of [n] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b(k) is equal to"}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}]}, {"question": "What does fixed effect mean in statistics", "positive_ctxs": [{"text": "Fixed effects are variables that are constant across individuals; these variables, like age, sex, or ethnicity, don't change or change at a constant rate over time. They have fixed effects; in other words, any change they cause to an individual is the same."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "The doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause. The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising."}, {"text": "For example, actors are allowed to pipeline the processing of messages. 
What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}]}, {"question": "How do I know if my data is Poisson distributed", "positive_ctxs": [{"text": "The number of outcomes in non-overlapping intervals are independent. The probability of two or more outcomes in a sufficiently short interval is virtually zero. The probability of exactly one outcome in a sufficiently short interval or small region is proportional to the length of the interval or region."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"It is not the case that if it is raining then I wear my coat. \", or equivalently, \"Sometimes, when it is raining, I don't wear my coat. \" If the negation is true, then the original proposition (and by extension the contrapositive) is false.Note that if"}, {"text": "Nobody is sorrier than me that the police officer had to spend his valuable time writing out a parking ticket on my car. Though from my personal standpoint I know for a certainty that the meter had not yet expired, please accept my expression of deep regret at this unfortunate incident."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. 
How he got into my pajamas I don't know.\""}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}]}, {"question": "What does Bayesian networks mean in Machine Learning", "positive_ctxs": [{"text": "A Bayesian network (also known as a Bayes network, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Efficient algorithms can perform inference and learning in Bayesian networks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In February 2017, IBM announced the first Machine Learning Hub in Silicon Valley to share expertise and teach companies about machine learning and data science. In April 2017 they expanded to Toronto, Beijing, and Stuttgart. A fifth Machine Learning Hub was created in August 2017 in India, Bangalore."}, {"text": "Bifet, Albert; Gavald\u00e0, Ricard; Holmes, Geoff; Pfahringer, Bernhard (2018). Machine Learning for Data Streams with Practical Examples in MOA.
Adaptive Computation and Machine Learning."}, {"text": "Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s."}, {"text": "Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks."}, {"text": "Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks."}, {"text": "Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:"}, {"text": "Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:"}]}, {"question": "What is validation in machine learning", "positive_ctxs": [{"text": "Definition. In machine learning, model validation is referred to as the process where a trained model is evaluated with a testing data set. The testing data set is a separate portion of the same data set from which the training set is derived. 
Model validation is carried out after model training."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Why do we need edge detection", "positive_ctxs": [{"text": "Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. 
Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "\"Marvin Minsky writes \"This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence? \"Nick Bostrom observes that \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.\""}, {"text": "Specific applications, like step detection and edge detection, may be concerned with changes in the mean, variance, correlation, or spectral density of the process. More generally change detection also includes the detection of anomalous behavior: anomaly detection."}, {"text": "To do that, we need to perform the relevant integration by substitution: thus, we need to multiply by the derivative of the (natural) logarithm function, which is 1/y. Hence, the transformed distribution has the following probability density function:"}, {"text": "It is important for a casino to know both the house edge and variance for all of their games. The house edge tells them what kind of profit they will make as percentage of turnover, and the variance tells them how much they need in the way of cash reserves. 
The mathematicians and computer programmers that do this kind of work are called gaming mathematicians and gaming analysts."}, {"text": "The DoG function will have strong responses along edges, even if the candidate keypoint is not robust to small amounts of noise. Therefore, in order to increase stability, we need to eliminate the keypoints that have poorly determined locations but have high edge responses."}]}, {"question": "Is the P value the same as Alpha", "positive_ctxs": [{"text": "Using P values and Significance Levels Together If your P value is less than or equal to your alpha level, reject the null hypothesis. The P value results are consistent with our graphical representation. The P value of 0.03112 is significant at the alpha level of 0.05 but not 0.01."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? 
Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit"}, {"text": "Is the yield of good cookies affected by the baking temperature and time in the oven? The table shows data for 8 batches of cookies."}]}, {"question": "Are generalized linear models statistical methods or machine learning methods", "positive_ctxs": [{"text": "A GLM is absolutely a statistical model, but statistical models and machine learning techniques are not mutually exclusive. In general, statistics is more concerned with inferring parameters, whereas in machine learning, prediction is the ultimate goal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore rule-based machine learning methods typically comprise a set of rules, or knowledge base, that collectively make up the prediction model."}, {"text": "Subgradient methods - An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient\u2013projection methods are similar to conjugate\u2013gradient methods."}, {"text": "Though originally defined for linear regression, lasso regularization is easily extended to other statistical models including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. 
Lasso\u2019s ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms of geometry, Bayesian statistics and convex analysis."}, {"text": "A possible point of confusion has to do with the distinction between generalized linear models and general linear models, two broad statistical models. Co-originator John Nelder has expressed regret over this terminology.The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development."}, {"text": "A possible point of confusion has to do with the distinction between generalized linear models and general linear models, two broad statistical models. Co-originator John Nelder has expressed regret over this terminology.The general linear model may be viewed as a special case of the generalized linear model with identity link and responses normally distributed. As most exact results of interest are obtained only for the general linear model, the general linear model has undergone a somewhat longer historical development."}, {"text": "The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data; in contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. 
These methods can, however, be used in creating new hypotheses to test against the larger data populations."}, {"text": "split Bregman are special instances of proximal algorithms. For the theory of proximal gradient methods from the perspective of and with applications to statistical learning theory, see proximal gradient methods for learning."}]}, {"question": "What data would be used as input to the machine learning algorithms", "positive_ctxs": [{"text": "We input the data in the learning algorithm as a set of inputs, which is called as Features, denoted by X along with the corresponding outputs, which is indicated by Y, and the algorithm learns by comparing its actual production with correct outputs to find errors. It then modifies the model accordingly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "Triplet loss is a loss function for machine learning algorithms where a baseline (anchor) input is compared to a positive (truthy) input and a negative (falsy) input. The distance from the baseline (anchor) input to the positive (truthy) input is minimized, and the distance from the baseline (anchor) input to the negative (falsy) input is maximized."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. 
To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "In a typical machine learning application, practitioners have a set of input data points to train on. The raw data may not be in a form that all algorithms can be applied to it. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods."}, {"text": "An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.Types of supervised learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email."}, {"text": "An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.Types of supervised learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email."}]}, {"question": "What is Kappa used for", "positive_ctxs": [{"text": "Kappa is widely used on Twitch in chats to signal you are being sarcastic or ironic, are trolling, or otherwise playing around with someone. 
It is usually typed at the end of a string of text, but, as is often the case on Twitch, it is also often used on its own or repeatedly (to spam someone)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Here, reporting quantity and allocation disagreement is informative while Kappa obscures information. Furthermore, Kappa introduces some challenges in calculation and interpretation because Kappa is a ratio. It is possible for Kappa's ratio to return an undefined value due to zero in the denominator."}, {"text": "Here, reporting quantity and allocation disagreement is informative while Kappa obscures information. Furthermore, Kappa introduces some challenges in calculation and interpretation because Kappa is a ratio. It is possible for Kappa's ratio to return an undefined value due to zero in the denominator."}, {"text": "Here, reporting quantity and allocation disagreement is informative while Kappa obscures information. Furthermore, Kappa introduces some challenges in calculation and interpretation because Kappa is a ratio. It is possible for Kappa's ratio to return an undefined value due to zero in the denominator."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning."}, {"text": "Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning."}, {"text": "Kappa is also used to compare performance in machine learning, but the directional version known as Informedness or Youden's J statistic is argued to be more appropriate for supervised learning."}]}, {"question": "What does the Q table in Q learning algorithm represent", "positive_ctxs": [{"text": "When q-learning is performed we create what's called a q-table or matrix that follows the shape of [state, action] and we initialize our values to zero. We then update and store our q-values after an episode. This q-table becomes a reference table for our agent to select the best action based on the q-value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument."}, {"text": "The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and \u039b represent the same linear transformation expressed in two different bases. 
The eigenvectors are used as the basis when representing the linear transformation as \u039b."}, {"text": "A field extension over the rationals Q can be thought of as a vector space over Q (by defining vector addition as field addition, defining scalar multiplication as field multiplication by elements of Q, and otherwise ignoring the field multiplication). The dimension (or degree) of the field extension Q(\u03b1) over Q depends on \u03b1. If \u03b1 satisfies some polynomial equation"}, {"text": "Specifically, if we consider an SDR model in which the overall population consists of Q clusters, each having K binary units, so that each coefficient is represented by a set of Q units, one per cluster. We can then consider the particular world state, X, whose coefficient's representation, R(X), is the set of Q units active at time t to have the maximal probability and the probabilities of all other states, Y, to correspond to the size of the intersection of R(Y) and R(X). Thus, R(X) simultaneously serves both as the representation of the particular state, X, and as a probability distribution over all states."}, {"text": "Given two points P and Q on C, let s(P,Q) be the arc length of the portion of the curve between P and Q and let d(P,Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit"}, {"text": "Select a random subset Q of [n] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b(k) is equal to"}]}, {"question": "How do you determine if there are outliers in a data set", "positive_ctxs": [{"text": "A commonly used rule says that a data point is an outlier if it is more than 1.5 \u22c5 IQR above the third quartile or below the first quartile."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean."}, {"text": "Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean."}, {"text": "Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean."}, {"text": "Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. In this case, one can use a truncated mean."}, {"text": "Sometimes, a set of numbers might contain outliers (i.e., data values which are much lower or much higher than the others). Often, outliers are erroneous data caused by artifacts. 
In this case, one can use a truncated mean."}]}, {"question": "Why is it important to randomise participants in a study", "positive_ctxs": [{"text": "Randomization as a method of experimental control has been extensively used in human clinical trials and other biological experiments. It prevents the selection bias and insures against the accidental bias. It produces the comparable groups and eliminates the source of bias in treatment assignments."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power."}, {"text": "Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is usually determined based on the cost, time, or convenience of collecting the data, and the need for it to offer sufficient statistical power."}, {"text": "A true experiment would, for example, randomly assign children to a scholarship, in order to control for all other variables. Quasi-experiments are commonly used in social sciences, public health, education, and policy analysis, especially when it is not practical or reasonable to randomize study participants to the treatment condition."}, {"text": "Randomized controlled trial: A method where the study population is divided randomly in order to mitigate the chances of self-selection by participants or bias by the study designers. 
Before the experiment begins, the testers will assign the members of the participant pool to their groups (control, intervention, parallel), using a randomization process such as the use of a random number generator. For example, in a study on the effects of exercise, the conclusions would be less valid if participants were given a choice if they wanted to belong to the control group which would not exercise or the intervention group which would be willing to take part in an exercise program."}, {"text": "An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence couldn't effectively support content evolution."}, {"text": "False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. A good way to prevent biases potentially leading to false positives in the data collection phase is to use a double-blind design. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group."}, {"text": "Furthermore, 60.1% (56.1\u201364.1) of participants were classified to have mild atopic dermatitis while 28.9% (25.3\u201332.7) had moderate and 11% (8.6\u201313.7) had severe. The study confirmed that there is a high prevalence and disease burden of atopic dermatitis in the population."}]}, {"question": "What is the difference between at test and a paired t test", "positive_ctxs": [{"text": "Two-sample t-test is used when the data of two samples are statistically independent, while the paired t-test is used when data is in the form of matched pairs. 
To use the two-sample t-test, we need to assume that the data from both samples are normally distributed and they have the same variances."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The common example scenario for when a paired difference test is appropriate is when a single set of test subjects has something applied to them and the test is intended to check for an effect."}, {"text": "With two paired samples, we look at the distribution of the difference scores. In that case, s is the standard deviation of this distribution of difference scores. This creates the following relationship between the t-statistic to test for a difference in the means of the two groups and Cohen's d:"}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used to compare two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ (i.e. it is a paired difference test). It can be used as an alternative to the paired Student's t-test (also known as \"t-test for matched pairs\" or \"t-test for dependent samples\") when the distribution of the difference between two samples' means cannot be assumed to be normally distributed."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. 
Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used."}]}, {"question": "How do you find the test statistic", "positive_ctxs": [{"text": "The formula to calculate the test statistic comparing two population means is, Z= ( x - y )/\u221a(\u03c3x2/n1 + \u03c3y2/n2). In order to calculate the statistic, we must calculate the sample means ( x and y ) and sample standard deviations (\u03c3x and \u03c3y) for each sample separately. n1 and n2 represent the two sample sizes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}, {"text": "As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the input observational data."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}]}, {"question": "Why is ReLu used in hidden layers", "positive_ctxs": [{"text": "One reason you should consider when using ReLUs is that they can produce dead neurons. 
That means that under certain circumstances your network can produce regions in which the network won't update, and the output is always 0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A deep Boltzmann machine has a sequence of layers of hidden units.There are only connections between adjacent hidden layers, as well as between visible units and hidden units in the first hidden layer. The energy function of the system adds layer interaction terms to the energy function of general restricted Boltzmann machine and is defined by"}, {"text": "An autoencoder is a feed-forward neural network which is trained to approximate the identity function. That is, it is trained to map from a vector of values to the same vector. When used for dimensionality reduction purposes, one of the hidden layers in the network is limited to contain only a small number of network units."}, {"text": "Batch normalization was initially proposed to mitigate internal covariate shift. During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This problem is especially severe for deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in significant shift in deeper hidden layers."}, {"text": "In recent years, pseudo-rehearsal has re-gained in popularity thanks to the progress in the capabilities of deep generative models. When such deep generative models are used to generate the \"pseudo-data\" to be rehearsed, this method is typically referred to as generative replay. Such generative replay can effectively prevent catastrophic forgetting, especially when the replay is performed in the hidden layers rather than at the input level."}, {"text": "This is the case of undercomplete autoencoders. 
If the hidden layers are larger than or equal to the input layer (overcomplete autoencoders), or the hidden units are given enough capacity, an autoencoder can potentially learn the identity function and become useless. However, experimental results have shown that autoencoders might still learn useful features in these cases."}, {"text": "A convolutional neural network consists of an input layer, hidden layers and an output layer. In any feed-forward neural network, any middle layers are called hidden because their inputs and outputs are masked by the activation function and final convolution. In a convolutional neural network, the hidden layers include layers that perform convolutions."}, {"text": "A convolutional neural network consists of an input layer, hidden layers and an output layer. In any feed-forward neural network, any middle layers are called hidden because their inputs and outputs are masked by the activation function and final convolution. In a convolutional neural network, the hidden layers include layers that perform convolutions."}]}, {"question": "How do you interpret mean square error", "positive_ctxs": [{"text": "The mean squared error tells you how close a regression line is to a set of points. It does this by taking the distances from the points to the regression line (these distances are the \u201cerrors\u201d) and squaring them. The squaring is necessary to remove any negative signs. It also gives more weight to larger differences."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. 
Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "The most common risk function used for Bayesian estimation is the mean square error (MSE), also called squared error risk. The MSE is defined by"}]}, {"question": "What is the meaning of learning to learn", "positive_ctxs": [{"text": "'Learning to learn' is the ability to pursue and persist in learning, to organise one's own learning, including through effective management of time and information, both individually and in groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "Finally, learning is seen to be heavily dependent on the mood of the learner, with learning being impaired if the learner is under stress or does not want to learn the language."}, {"text": "Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. 
Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data."}, {"text": "Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data."}, {"text": "Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning is sparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data."}, {"text": "The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output."}, {"text": "The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. 
The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output."}]}, {"question": "What is the difference between Backpropagation and gradient descent", "positive_ctxs": [{"text": "Back-propagation is the process of calculating the derivatives and gradient descent is the process of descending through the gradient, i.e. adjusting the parameters of the model to go down through the loss function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. 
Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}]}, {"question": "How do you find the correlation between categorical variables", "positive_ctxs": [{"text": "To measure the relationship between numeric variable and categorical variable with > 2 levels you should use eta correlation (square root of the R2 of the multifactorial regression). If the categorical variable has 2 levels, point-biserial correlation is used (equivalent to the Pearson correlation)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Linear trends are also used to find associations between ordinal data and other categorical variables, normally in a contingency tables. A correlation r is found between the variables where r lies between -1 and 1. To test the trend, a test statistic:"}, {"text": "If we compute the Pearson correlation coefficient between variables X and Y, the result is approximately 0.970, while if we compute the partial correlation between X and Y, using the formula given above, we find a partial correlation of 0.919. The computations were done using R with the following code."}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or \u22121 occurs when each of the variables is a perfect monotone function of the other."}]}, {"question": "Can z score be used for non normal distribution", "positive_ctxs": [{"text": "A Z-score is a score which indicates how many standard deviations an observation is from the mean of the distribution. Z-scores tend to be used mainly in the context of the normal curve, and their interpretation based on the standard normal table. Non-normal distributions can also be transformed into sets of Z-scores."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. 
z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population.The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "Suppose that we have a sample of 99 test scores with a mean of 100 and a standard deviation of 1. If we assume all 99 test scores are random observations from a normal distribution, then we predict there is a 1% chance that the 100th test score will be higher than 102.33 (that is, the mean plus 2.33 standard deviations), assuming that the 100th test score comes from the same distribution as the others. Parametric statistical methods are used to compute the 2.33 value above, given 99 independent observations from the same normal distribution."}, {"text": "Note that since U1 + U2 = n1n2, the mean n1n2/2 used in the normal approximation is the mean of the two values of U. 
Therefore, the absolute value of the z statistic calculated will be same whichever value of U is used."}]}, {"question": "What is the difference between a normal distribution and a standard normal distribution", "positive_ctxs": [{"text": "A normal distribution is determined by two parameters the mean and the variance. Now the standard normal distribution is a specific distribution with mean 0 and variance 1. This is the distribution that is used to construct tables of the normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-square distribution is the square of a standard normal distribution."}, {"text": "Because the square of a standard normal distribution is the chi-square distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-square distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. 
Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed)."}, {"text": "This formulation\u2014which is standard in discrete choice models\u2014makes clear the relationship between logistic regression (the \"logit model\") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, \"bell curve\" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data)."}, {"text": "This formulation\u2014which is standard in discrete choice models\u2014makes clear the relationship between logistic regression (the \"logit model\") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, \"bell curve\" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data)."}, {"text": "This formulation\u2014which is standard in discrete choice models\u2014makes clear the relationship between logistic regression (the \"logit model\") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, \"bell curve\" shape. 
The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data)."}, {"text": "One of the most popular application of cumulative distribution function is standard normal table, also called the unit normal table or Z table, is the value of cumulative distribution function of the normal distribution. It is very useful to use Z-table not only for probabilities below a value which is the original application of cumulative distribution function, but also above and/or between values on standard normal distribution, and it was further extended to any normal distribution."}, {"text": "The subscript 1 indicates that this particular chi-square distribution is constructed from only 1 standard normal distribution. A chi-square distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution."}]}, {"question": "What does the standard error of the estimate represent", "positive_ctxs": [{"text": "The standard error of estimate, Se indicates approximately how much error you make when you use the predicted value for Y (on the least-squares line) instead of the actual value of Y."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, a poll's standard error (what is reported as the margin of error of the poll), is the expected standard deviation of the estimated mean if the same poll were to be conducted multiple times. 
Thus, the standard error estimates the standard deviation of an estimate, which itself measures how much the estimate depends on the particular sample that was taken from the population."}, {"text": "For example, a poll's standard error (what is reported as the margin of error of the poll), is the expected standard deviation of the estimated mean if the same poll were to be conducted multiple times. Thus, the standard error estimates the standard deviation of an estimate, which itself measures how much the estimate depends on the particular sample that was taken from the population."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}]}, {"question": "What is the exponential smoothing formula", "positive_ctxs": [{"text": "The component form of simple exponential smoothing is given by: Forecast equation^yt+h|t=\u2113tSmoothing equation\u2113t=\u03b1yt+(1\u2212\u03b1)\u2113t\u22121, Forecast equation y ^ t + h | t = \u2113 t Smoothing equation \u2113 t = \u03b1 y t + ( 1 \u2212 \u03b1 ) \u2113 t \u2212 1 , where \u2113t is the level (or the smoothed value) of the series at time t ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For every exponential smoothing method we also need to choose the value for the smoothing parameters. 
For simple exponential smoothing, there is only one smoothing parameter (\u03b1), but for the methods that follow there is usually more than one smoothing parameter."}, {"text": "Simple exponential smoothing does not do well when there is a trend in the data, which is inconvenient. In such situations, several methods were devised under the name \"double exponential smoothing\" or \"second-order exponential smoothing,\" which is the recursive application of an exponential filter twice, thus being termed \"double exponential smoothing\". This nomenclature is similar to quadruple exponential smoothing, which also references its recursion depth."}, {"text": "They differ in that exponential smoothing takes into account all past data, whereas moving average only takes into account k past data points. Computationally speaking, they also differ in that moving average requires that the past k data points, or the data point at lag k + 1 plus the most recent forecast value, to be kept, whereas exponential smoothing only needs the most recent forecast value to be kept.In the signal processing literature, the use of non-causal (symmetric) filters is commonplace, and the exponential window function is broadly used in this fashion, but a different terminology is used: exponential smoothing is equivalent to a first-order infinite-impulse response (IIR) filter and moving average is equivalent to a finite impulse response filter with equal weighting factors."}, {"text": "Exponential smoothing is a rule of thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. 
It is an easily learned and easily applied procedure for making some determination based on prior assumptions by the user, such as seasonality."}, {"text": "There are cases where the smoothing parameters may be chosen in a subjective manner \u2013 the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more robust and objective way to obtain values for the unknown parameters included in any exponential smoothing method is to estimate them from the observed data."}, {"text": "The basic idea behind double exponential smoothing is to introduce a term to take into account the possibility of a series exhibiting some form of trend. This slope component is itself updated via exponential smoothing."}, {"text": "The use of the exponential window function is first attributed to Poisson as an extension of a numerical analysis technique from the 17th century, and later adopted by the signal processing community in the 1940s. Here, exponential smoothing is the application of the exponential, or Poisson, window function. Exponential smoothing was first suggested in the statistical literature without citation to previous work by Robert Goodell Brown in 1956, and then expanded by Charles C. Holt in 1957."}]}, {"question": "Which type of hierarchical clustering algorithm is more commonly used", "positive_ctxs": [{"text": "The Agglomerative Hierarchical Clustering is the most common type of hierarchical clustering used to group objects in clusters based on their similarity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. 
Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "This definition of the Laplacian is commonly used in numerical analysis and in image processing. In image processing, it is considered to be a type of digital filter, more specifically an edge filter, called the Laplace filter."}]}, {"question": "What is reasoning in artificial intelligence", "positive_ctxs": [{"text": "The reasoning is the mental process of deriving logical conclusion and making predictions from available knowledge, facts, and beliefs. 
In artificial intelligence, reasoning is essential so that the machine can also think rationally, as a human brain does, and can perform like a human."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A state space is the set of all possible configurations of a system. It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory."}, {"text": "Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}, {"text": "Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}, {"text": "Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}, {"text": "Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}, {"text": "Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}, {"text": "Learning a grammar (visual or linguistic) from 
training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI)."}]}, {"question": "What are the uses of eigenvalues", "positive_ctxs": [{"text": "The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data. They can be used for predicting stock prices and analyzing correlations between various stocks, corresponding to different companies."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors"}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j:"}, {"text": "The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. 
The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j:"}, {"text": "The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j:"}, {"text": "The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the jth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through j:"}]}, {"question": "What is learning rate in CNN", "positive_ctxs": [{"text": "The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. The learning rate may be the most important hyperparameter when configuring your neural network."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. 
The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}]}, {"question": "How can frequency resolution be improved", "positive_ctxs": [{"text": "The most intuitive way to increase the frequency resolution of an FFT is to increase the size while keeping the sampling frequency constant. Doing this will increase the number of frequency bins that are created, decreasing the frequency difference between each."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. 
In the most basic form, the desired frequency response itself can be sampled with a resolution of"}, {"text": "The frequency of accidents on a road fell after a speed camera was installed. Therefore, the speed camera has improved road safety."}, {"text": "The spatial resolution of FTIR can be further improved below the micrometer scale by integrating it into scanning near-field optical microscopy platform. The corresponding technique is called nano-FTIR and allows for performing broadband spectroscopy on materials in ultra-small quantities (single viruses and protein complexes) and with 10 to 20 nm spatial resolution."}, {"text": "How increased sample size translates to higher power is a measure of the efficiency of the test \u2014 for example, the sample size required for a given power. The precision with which the data are measured also influences statistical power. Consequently, power can often be improved by reducing the measurement error in the data. A related concept is to improve the \"reliability\" of the measure being assessed (as in psychometric reliability)."}, {"text": "Thus a 4 cm\u22121 resolution will be obtained if the maximal retardation is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal retardation. This is not easy, as the moving mirror must travel in a near-perfect straight line."}, {"text": "There are many examples of conflict resolution in history, and there has been a debate about the ways to conflict resolution: whether it should be forced or peaceful. Conflict resolution by peaceful means is generally perceived to be a better option. The conflict resolution curve derived from an analytical model that offers a peaceful solution by motivating conflicting entities."}, {"text": "The resolution method works only with formulas that are disjunctions of atomic formulas; arbitrary formulas must first be converted to this form through Skolemization. 
The resolution rule states that from the hypotheses"}]}, {"question": "What is linear by linear association chi square test", "positive_ctxs": [{"text": "The \"Linear-by-Linear\" test is for ordinal (ordered) categories and assumes equal and ordered intervals. The Linear-by-Linear Association test is a test for trends in a larger-than-2x2 table. Its value is shown to be significant and indicates that income tends to rise with values of \"male\" (i.e., from 0 to 1)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Testing for the equality of two or more variances is difficult. The F test and chi square tests are both adversely affected by non-normality and are not recommended for this purpose."}, {"text": "Testing for the equality of two or more variances is difficult. The F test and chi square tests are both adversely affected by non-normality and are not recommended for this purpose."}, {"text": "Testing for the equality of two or more variances is difficult. The F test and chi square tests are both adversely affected by non-normality and are not recommended for this purpose."}, {"text": "Collinearity is a linear association between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between them."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "Can you use continuous variables in logistic regression", "positive_ctxs": [{"text": "In logistic regression, as with any flavour of regression, it is fine, indeed usually better, to have continuous predictors. Given a choice between a continuous variable as a predictor and categorising a continuous variable for predictors, the first is usually to be preferred."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. 
Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Categorical variables represent a qualitative method of scoring data (i.e. represents categories or group membership). These can be included as independent variables in a regression analysis or as dependent variables in logistic regression or probit regression, but must be converted to quantitative data in order to be able to analyze the data."}, {"text": "Categorical variables represent a qualitative method of scoring data (i.e. represents categories or group membership). These can be included as independent variables in a regression analysis or as dependent variables in logistic regression or probit regression, but must be converted to quantitative data in order to be able to analyze the data."}, {"text": "Categorical variables represent a qualitative method of scoring data (i.e. represents categories or group membership). These can be included as independent variables in a regression analysis or as dependent variables in logistic regression or probit regression, but must be converted to quantitative data in order to be able to analyze the data."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}]}, {"question": "How do you do multidimensional scaling in SPSS", "positive_ctxs": [{"text": "From the menus of SPSS choose: Analyze Scale Multidimensional Scaling\u2026 In Distances, select either Data are distances or Create distances from data. 
If your data are distances, you must select at least four numeric variables for analysis, and you can click Shape to indicate the shape of the distance matrix."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. 
The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of \"sociology's most influential books\" for allowing ordinary researchers to do their own statistical analysis."}]}, {"question": "What is local minima in machine learning", "positive_ctxs": [{"text": "A local minimum of a function (typically a cost function in machine learning, which is something we want to minimize based on empirical data) is a point in the domain of a function that has the following property: the function evaluates to a greater value at every other point in a neighborhood around the local minimum"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}]}, {"question": "What are the types of classification in statistics", "positive_ctxs": [{"text": "There are four types of classification. They are Geographical classification, Chronological classification, Qualitative classification, Quantitative classification."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. 
What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied."}]}, {"question": "What is dataset in TensorFlow", "positive_ctxs": [{"text": "TensorFlow Datasets is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started see the guide and our list of datasets."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A balanced panel (e.g., the first dataset above) is a dataset in which each panel member (i.e., person) is observed every year. Consequently, if a balanced panel contains N panel members and T periods, the number of observations (n) in the dataset is necessarily n = N\u00d7T."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript. In Jan 2019, Google announced TensorFlow 2.0. 
It became officially available in Sep 2019. In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics."}, {"text": "In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript. In Jan 2019, Google announced TensorFlow 2.0. It became officially available in Sep 2019. In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics."}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "Transformers is a library produced by Hugging Face which supplies Transformer-based architectures and pretrained models. The library is free software and available on GitHub. Its models are available both in PyTorch and TensorFlow format."}]}, {"question": "What are eigenvectors of a matrix", "positive_ctxs": [{"text": "Eigenvectors are a special set of vectors associated with a linear system of equations (i.e., a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. Each eigenvector is paired with a corresponding so-called eigenvalue."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces."}, {"text": "Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues \u03bb1, \u03bb2, ..., \u03bbn. The eigenvalues need not be distinct. 
Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,"}, {"text": "Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors."}, {"text": "Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors."}, {"text": "Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors."}, {"text": "Matrix V denotes the matrix of right eigenvectors (as opposed to left eigenvectors). In general, the matrix of right eigenvectors need not be the (conjugate) transpose of the matrix of left eigenvectors."}, {"text": "The main linear technique for dimensionality reduction, principal component analysis, performs a linear mapping of the data to a lower-dimensional space in such a way that the variance of the data in the low-dimensional representation is maximized. In practice, the covariance (and sometimes the correlation) matrix of the data is constructed and the eigenvectors on this matrix are computed. The eigenvectors that correspond to the largest eigenvalues (the principal components) can now be used to reconstruct a large fraction of the variance of the original data."}]}, {"question": "What mean standard deviation", "positive_ctxs": [{"text": "The standard deviation is a statistic that measures the dispersion of a dataset relative to its mean and is calculated as the square root of the variance. 
If the data points are further from the mean, there is a higher deviation within the data set; thus, the more spread out the data, the higher the standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "has a mean, but not a standard deviation (loosely speaking, the standard deviation is infinite). The Cauchy distribution has neither a mean nor a standard deviation."}, {"text": "has a mean, but not a standard deviation (loosely speaking, the standard deviation is infinite). The Cauchy distribution has neither a mean nor a standard deviation."}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. 
However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}]}, {"question": "What is statistical design of experiments", "positive_ctxs": [{"text": "The (statistical) design of experiments (DOE) is an efficient procedure for planning experiments so that the data obtained can be analyzed to yield valid and objective conclusions. DOE begins with determining the objectives of an experiment and selecting the process factors for the study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling."}, {"text": "In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling."}, {"text": "In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling."}, {"text": "Model selection is the task of selecting a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. 
However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection."}, {"text": "The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation."}, {"text": "Correctly designed experiments advance knowledge in the natural and social sciences and engineering. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience."}, {"text": "Sequential analysis was pioneered by Abraham Wald. In 1972, Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs were surveyed later by S. Zacks. Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald."}]}, {"question": "How do you predict using machine learning", "positive_ctxs": [{"text": "With the LassoCV, RidgeCV, and Linear Regression machine learning algorithms. Define the problem. Gather the data. Clean & Explore the data. Model the data. Evaluate the model. Answer the problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": "These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. By reading these over-optimistic scores, then you will be very happy and will think that your machine learning algorithm is doing an excellent job. Obviously, you would be on the wrong track."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. 
However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}]}, {"question": "What is an example of bootstrapping", "positive_ctxs": [{"text": "Bootstrapping is a type of resampling where large numbers of smaller samples of the same size are repeatedly drawn, with replacement, from a single original sample. For example, let's say your sample was made up of ten numbers: 49, 34, 21, 18, 10, 8, 6, 5, 2, 1. You randomly draw three numbers 5, 1, and 49."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state\u2014the bootstrapping behavior is overwhelmed by steady-state behavior."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? 
It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "When the theoretical distribution of a statistic of interest is complicated or unknown. Since the bootstrapping procedure is distribution-independent it provides an indirect method to assess the properties of the distribution underlying the sample and the parameters of interest that are derived from this distribution. When the sample size is insufficient for straightforward statistical inference. If the underlying distribution is well-known, bootstrapping provides a way to account for the distortions caused by the specific sample that may not be fully representative of the population. When power calculations have to be performed, and a small pilot sample is available."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge. A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". 
This can be represented in pseudocode as:"}]}, {"question": "What is the difference between mean and standard deviation", "positive_ctxs": [{"text": "The standard deviation (SD) measures the amount of variability, or dispersion, from the individual data values to the mean, while the standard error of the mean (SEM) measures how far the sample mean of the data is likely to be from the true population mean. The SEM is always smaller than the SD."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. 
However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}]}, {"question": "What is the purpose of sampling frame", "positive_ctxs": [{"text": "A simple definition of a sampling frame is the set of source materials from which the sample is selected. The definition also encompasses the purpose of sampling frames, which is to provide a means for choosing the particular members of the target population that are to be interviewed in the survey."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a sampling frame is the source material or device from which a sample is drawn. 
It is a list of all those within a population who can be sampled, and may include individuals, households or institutions. Importance of the sampling frame is stressed by Jessen and Salant and Dillman."}, {"text": "In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on)."}, {"text": "In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on)."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory."}, {"text": "As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. 
For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory."}, {"text": "Conceptually, simple random sampling is the simplest of the probability sampling techniques. It requires a complete sampling frame, which may not be available or feasible to construct for large populations. Even if a complete frame is available, more efficient approaches may be possible if other useful information is available about the units in the population."}]}, {"question": "What is an example of a statistic in the study", "positive_ctxs": [{"text": "A statistic is a characteristic of a sample. Generally, a statistic is used to estimate the value of a population parameter. For instance, suppose we selected a random sample of 100 students from a school with 1000 students. The average height of the sampled students would be an example of a statistic."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? 
It has to do with uncertainty at runtime of a plan."}, {"text": "Therefore, sleeping with the light on causes myopia. This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press. However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia."}, {"text": "Therefore, sleeping with the light on causes myopia. This is a scientific example that resulted from a study at the University of Pennsylvania Medical Center. Published in the May 13, 1999 issue of Nature, the study received much coverage at the time in the popular press. However, a later study at Ohio State University did not find that infants sleeping with the light on caused the development of myopia."}, {"text": "is a test statistic, rather than any of the actual observations. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as the average or the correlation coefficient, that summarizes the characteristics of the data, in a way relevant to a particular inquiry."}, {"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}]}, {"question": "What is a random variable in probability theory", "positive_ctxs": [{"text": "A random variable is a numerical description of the outcome of a statistical experiment. For a discrete random variable, x, the probability distribution is defined by a probability mass function, denoted by f(x). 
This function provides the probability for each value of the random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "In other words, the probability that a random variable assumes a value depends on its immediate neighboring random variables. The probability of a random variable in an MRF is given by"}, {"text": "In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is described informally as a variable whose values depend on outcomes of a random phenomenon. The formal mathematical treatment of random variables is a topic in probability theory. In that context, a random variable is understood as a measurable function defined on a probability space that maps from the sample space to the real numbers."}, {"text": "In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is described informally as a variable whose values depend on outcomes of a random phenomenon. The formal mathematical treatment of random variables is a topic in probability theory. In that context, a random variable is understood as a measurable function defined on a probability space that maps from the sample space to the real numbers."}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. 
The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}, {"text": "In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the \"amount of information\" (in units such as shannons, commonly called bits) obtained about one random variable through observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected \"amount of information\" held in a random variable."}]}, {"question": "How do you find the Poisson distribution", "positive_ctxs": [{"text": "(When does a random variable have a Poisson YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution."}, {"text": "All of the cumulants of the Poisson distribution are equal to the expected value \u03bb. The nth factorial moment of the Poisson distribution is \u03bbn."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}]}, {"question": "How the Bayesian network can be used", "positive_ctxs": [{"text": "Bayesian networks are a type of Probabilistic Graphical Model that can be used to build models from data and/or expert opinion. 
They can be used for a wide range of tasks including prediction, anomaly detection, diagnostics, automated insight, reasoning, time series prediction and decision making under uncertainty."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences being that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). 
For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}, {"text": "A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases."}]}, {"question": "What is a normal distribution in a histogram", "positive_ctxs": [{"text": "A common pattern is the bell-shaped curve known as the \"normal distribution.\" In a normal or \"typical\" distribution, points are as likely to occur on one side of the average as on the other. 
Note that other distributions look similar to the normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small."}, {"text": "An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small."}, {"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "An alternative parametric approach is to assume that the residuals follow a mixture of normal distributions (Daemi et al. 2019); in particular, a contaminated normal distribution in which the majority of observations are from a specified normal distribution, but a small proportion are from a normal distribution with much higher variance. That is, residuals have probability"}, {"text": "The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region which creates the histogram of the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. 
In order to avoid the number of histogram bins growing like K^d, the probability distribution is approximated by a separable function:"}, {"text": "has a joint normal distribution. A simple example is one in which X has a normal distribution with expected value 0 and variance 1, and
The reversal of the inequality between the ratios, which creates Simpson's paradox, happens because two effects occur together:"}, {"text": "It is important that the remote sensor chooses a classification method that works best with the number of classifications used while providing the least amount of error."}, {"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in data set."}]}, {"question": "Which is better supervised or unsupervised learning", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. 
It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. 
The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}]}, {"question": "What does normal range mean", "positive_ctxs": [{"text": "Listen to pronunciation. (NOR-mul raynj) In medicine, a set of values that a doctor uses to interpret a patient's test results. The normal range for a given test is based on the results that are seen in 95% of the healthy population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where x(G) is the inverse function. In the case where each of the Xi has a standard normal distribution, the mean range is given by"}, {"text": "The normal family of distributions all have the same general shape and are parameterized by mean and standard deviation. That means that if the mean and standard deviation are known and if the distribution is normal, the probability of any future observation lying in a given range is known."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. 
This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}]}, {"question": "How do you find the covariance of three variables", "positive_ctxs": [{"text": "Now, three variable case it is less clear for me. An intuitive definition for covariance function would be Cov(X,Y,Z)=E[(x\u2212E[X])(y\u2212E[Y])(z\u2212E[Z])], but instead the literature suggests using covariance matrix that is defined as two variable covariance for each pair of variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Feature selection approaches try to find a subset of the input variables (also called features or attributes). The three strategies are: the filter strategy (e.g. information gain), the wrapper strategy (e.g."}, {"text": "Feature selection approaches try to find a subset of the input variables (also called features or attributes). The three strategies are: the filter strategy (e.g. information gain), the wrapper strategy (e.g."}, {"text": "In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive. 
In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other, (that is, the variables tend to show opposite behavior), the covariance is negative."}]}, {"question": "Which is not a linear operator", "positive_ctxs": [{"text": "The simplest example of a non-linear operator (non-linear functional) is a real-valued function of a real argument other than a linear function. Under other restrictions on K(t,s,u) an Urysohn operator acts on other spaces, for instance, L2[a,b] or maps one Orlicz space LM1[a,b] into another LM2[a,b]."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations."}, {"text": "Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations."}, {"text": "A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. 
It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 \u2264 p < \u221e is the convolution with a tempered distribution whose Fourier transform is bounded."}, {"text": "A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 \u2264 p < \u221e is the convolution with a tempered distribution whose Fourier transform is bounded."}, {"text": "A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 \u2264 p < \u221e is the convolution with a tempered distribution whose Fourier transform is bounded."}, {"text": "A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. 
It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 \u2264 p < \u221e is the convolution with a tempered distribution whose Fourier transform is bounded."}, {"text": "A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires assuming in addition that S must be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every continuous translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every continuous translation invariant continuous linear operator on Lp for 1 \u2264 p < \u221e is the convolution with a tempered distribution whose Fourier transform is bounded."}]}, {"question": "What is random variables in probability", "positive_ctxs": [{"text": "A random variable is a numerical description of the outcome of a statistical experiment. For a discrete random variable, x, the probability distribution is defined by a probability mass function, denoted by f(x). This function provides the probability for each value of the random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"Density function\" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables."}, {"text": "\"Density function\" itself is also used for the probability mass function, leading to further confusion. 
In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables."}, {"text": "\"Density function\" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables."}, {"text": "is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous, a mixture distribution is one such counterexample; such random variables cannot be described by a probability density or a probability mass function."}, {"text": "is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous, a mixture distribution is one such counterexample; such random variables cannot be described by a probability density or a probability mass function."}, {"text": "Convergence in probability defines a topology on the space of random variables over a fixed probability space. This topology is metrizable by the Ky Fan metric:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}]}, {"question": "What does it mean to control for a variable in multiple regression", "positive_ctxs": [{"text": "Multiple regression estimates how the changes in each predictor variable relate to changes in the response variable. What does it mean to control for the variables in the model? It means that when you look at the effect of one variable in the model, you are holding constant all of the other predictors in the model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. 
Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}]}, {"question": "What are the measures of similarity in data mining", "positive_ctxs": [{"text": "In a Data Mining sense, the similarity measure is a distance with dimensions describing object features. 
That means if the distance between two data points is small then there is a high degree of similarity between the objects and vice versa. The similarity is subjective and depends heavily on the context and application."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Not all patterns found by data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained."}, {"text": "However, they are identical in generally taking the ratio of Intersection over Union. The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets:"}, {"text": "Structural information about languages allows for the discovery and implementation of similarity recognition between pairs of text utterances. For instance, it has recently been proven that based on the structural information present in patterns of human discourse, conceptual recurrence plots can be used to model and visualize trends in data and create reliable measures of similarity between natural textual utterances. This technique is a strong tool for further probing the structure of human discourse."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? 
What are the extended dimensions of the pressure of the two parts?"}, {"text": "The difference between data analysis and data mining is that data analysis is used to test models and hypotheses on the dataset, e.g., analyzing the effectiveness of a marketing campaign, regardless of the amount of data; in contrast, data mining uses machine learning and statistical models to uncover clandestine or hidden patterns in a large volume of data.The related terms data dredging, data fishing, and data snooping refer to the use of data mining methods to sample parts of a larger population data set that are (or may be) too small for reliable statistical inferences to be made about the validity of any patterns discovered. These methods can, however, be used in creating new hypotheses to test against the larger data populations."}, {"text": "In statistics and related fields, a similarity measure or similarity function is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity measure exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects."}, {"text": "In statistics and related fields, a similarity measure or similarity function is a real-valued function that quantifies the similarity between two objects. 
Although no single definition of a similarity measure exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects."}]}, {"question": "How is KNN algorithm calculated", "positive_ctxs": [{"text": "Here is step by step on how to compute K-nearest neighbors KNN algorithm:Determine parameter K = number of nearest neighbors.Calculate the distance between the query-instance and all the training samples.Sort the distance and determine nearest neighbors based on the K-th minimum distance.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?"}, {"text": "The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)?"}]}, {"question": "How do you determine the intervals for a histogram", "positive_ctxs": [{"text": "2:194:05Suggested clip \u00b7 97 secondsChoosing Intervals for a Histogram - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. 
The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5\u201320.5 and 20.5\u201333.5, but not two connecting intervals of 10.5\u201320.5 and 22.5\u201332.5.)"}, {"text": "In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5\u201320.5 and 20.5\u201333.5, but not two connecting intervals of 10.5\u201320.5 and 22.5\u201332.5.)"}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous. Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous. Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. 
If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "Because of their randomness, you may compute from the sample specific intervals containing the fixed \u03bc with a given probability that you denote confidence."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "How is the threshold value calculated in image processing", "positive_ctxs": [{"text": "Automatic thresholding Select initial threshold value, typically the mean 8-bit value of the original image. Divide the original image into two portions; Pixel values that are less than or equal to the threshold; background. Pixel values greater than the threshold; foreground."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These climates are characterized by the amount of annual precipitation less than a threshold value which approximates the potential evapotranspiration. The threshold value (in millimeters) is calculated as follows:"}, {"text": "Local methods adapt the threshold value on each pixel to the local image characteristics. In these methods, a different T is selected for each pixel in the image."}, {"text": "Discrete Laplace operator is often used in image processing e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives Laplace operator#Coordinate expressions and calculated as sum of differences over the nearest neighbours of the central pixel."}, {"text": "The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. 
The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units."}, {"text": "The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated in each node, and if the value is above some threshold (typically 0) the neuron fires and takes the activated value (typically 1); otherwise it takes the deactivated value (typically -1). Neurons with this kind of activation function are also called artificial neurons or linear threshold units."}, {"text": "If the difference between the previous threshold value and the new threshold value is below a specified limit, you are finished. Otherwise, apply the new threshold to the original image and keep trying."}, {"text": "the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is"}]}, {"question": "How do you detect data Drifting", "positive_ctxs": [{"text": "Since both drifts involve a statistical change in the data, the best approach to detect them is by monitoring its statistical properties, the model's predictions, and their correlation with other factors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is the delta rule of Adaline network", "positive_ctxs": [{"text": "The process of adjusting the weights and threshold of the ADALINE network is based on a learning algorithm named the Delta rule (Widrow and Hoff 1960) or Widrow-Hoff learning rule, also known as LMS (Least Mean Square ) algorithm or Gradient Descent method."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. 
The error for a neural network with"}, {"text": "In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It is a special case of the more general backpropagation algorithm."}, {"text": "While the delta rule is similar to the perceptron's update rule, the derivation is different. The perceptron uses the Heaviside step function as the activation function"}, {"text": "is the time constant of adaptation current wk, Em is the resting potential and tf is the firing time of the neuron and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold the voltage is reset to a value Vr below the firing threshold. The reset value is one of the important parameters of the model."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Another common practice is to represent discrete sequences with square brackets; thus: \u03b4[n]. The Kronecker delta is not the result of directly sampling the Dirac delta function."}, {"text": "This rule was introduced by Amos Storkey in 1997 and is both local and incremental. Storkey also showed that a Hopfield network trained using this rule has a greater capacity than a corresponding network trained using the Hebbian rule. The weight matrix of an attractor neural network is said to follow the Storkey learning rule if it obeys:"}]}, {"question": "Why do we use negative log likelihood", "positive_ctxs": [{"text": "It's a cost function that is used as loss for machine learning models, telling us how bad it's performing, the lower the better. 
Also it's much easier to reason about the loss this way, to be consistent with the rule of loss functions approaching 0 as the model gets better."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter \u03b1 of the log likelihood function. Therefore, Fisher information is a measure of the curvature of the log likelihood function of \u03b1. A low curvature (and therefore high radius of curvature), flatter log likelihood function curve has low Fisher information; while a log likelihood function curve with large curvature (and therefore low radius of curvature) has high Fisher information."}, {"text": "In the above equation, D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution."}, {"text": "In the above equation, D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution."}, {"text": "In the above equation, D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution."}, {"text": "As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed form solution for arbitrary values of the shape parameters. 
If X1, ..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is:"}, {"text": "One approach to inference uses large sample approximations to the sampling distribution of the log odds ratio (the natural logarithm of the odds ratio). If we use the joint probability notation defined above, the population log odds ratio is"}, {"text": "Below are the likelihood and log likelihood functions for a type I tobit. This is a tobit that is censored from below at"}]}, {"question": "Is Hopfield network supervised or unsupervised", "positive_ctxs": [{"text": "The learning algorithm of the Hopfield network is unsupervised, meaning that there is no \u201cteacher\u201d telling the network what is the correct output for a certain input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hopfield and Tank presented the Hopfield network application in solving the classical traveling-salesman problem in 1985. Since then, the Hopfield network has been widely used for optimization. The idea of using the Hopfield network in optimization problems is straightforward: If a constrained/unconstrained cost function can be written in the form of the Hopfield energy function E, then there exists a Hopfield network whose equilibrium points represent solutions to the constrained/unconstrained optimization problem."}, {"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al."}, {"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. 
Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al."}, {"text": "A Hopfield network (or Ising model of a neural network or Ising\u2013Lenz\u2013Little model) is a form of recurrent artificial neural network popularized by John Hopfield in 1982, but described earlier by Little in 1974 based on Ernst Ising's work with Wilhelm Lenz. Hopfield networks serve as content-addressable (\"associative\") memory systems with binary threshold nodes. They are guaranteed to converge to a local minimum, and can therefore store and recall multiple memories, but they may also converge to a false pattern (wrong local minimum) rather than a stored pattern (expected local minimum) if the input is too dissimilar from any memory."}, {"text": "If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration."}, {"text": "If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration."}, {"text": "If the connections are trained using Hebbian learning then the Hopfield network can perform as robust content-addressable memory, resistant to connection alteration."}]}, {"question": "What is the difference between precision and recall", "positive_ctxs": [{"text": "Recall is the number of relevant documents retrieved by a search divided by the total number of existing relevant documents, while precision is the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In pattern recognition, information retrieval and classification (machine learning), precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as 
sensitivity) is the fraction of relevant instances that were retrieved. Both precision and recall are therefore based on relevance."}, {"text": "In pattern recognition, information retrieval and classification (machine learning), precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Both precision and recall are therefore based on relevance."}, {"text": "In pattern recognition, information retrieval and classification (machine learning), precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Both precision and recall are therefore based on relevance."}, {"text": "In pattern recognition, information retrieval and classification (machine learning), precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Both precision and recall are therefore based on relevance."}, {"text": "The highest possible value of an F-score is 1, indicating perfect precision and recall, and the lowest possible value is 0, if either the precision or the recall is zero. The F1 score is also known as the S\u00f8rensen\u2013Dice coefficient or Dice similarity coefficient (DSC)."}, {"text": ", and is thus also known as the G-measure, while the F-measure is their harmonic mean. Moreover, precision and recall are also known as Wallace's indices"}, {"text": ", and is thus also known as the G-measure, while the F-measure is their harmonic mean. 
Moreover, precision and recall are also known as Wallace's indices"}]}, {"question": "Why do we Standardise normal distribution", "positive_ctxs": [{"text": "So that we only have to have one area table, rather than an infinite number of area tables. Of course, technology can find area under any normal curve and so tables of values are a bit archaic."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Suppose that we want to compare two models: one with a normal distribution of y and one with a normal distribution of log(y). We should not directly compare the AIC values of the two models. Instead, we should transform the normal cumulative distribution function to first take the logarithm of y."}, {"text": "follows the standard normal distribution N(0,1), then the rejection of this null hypothesis could mean that (i) the mean is not 0, or (ii) the variance is not 1, or (iii) the distribution is not normal. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. Anyway, if we do manage to reject the null hypothesis, even if we know the distribution is normal and variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible."}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. 
That is, having a sample"}, {"text": "It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample"}]}, {"question": "How do you rank data for the Kruskal Wallis test", "positive_ctxs": [{"text": "When working with a measurement variable, the Kruskal\u2013Wallis test starts by substituting the rank in the overall data set for each measurement value. The smallest value gets a rank of 1, the second-smallest gets a rank of 2, etc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "The logrank statistic can be used when observations are censored. If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "The logrank statistic can be used when observations are censored. If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "When should I use Poisson distribution", "positive_ctxs": [{"text": "The Poisson distribution is used to describe the distribution of rare events in a large population. For example, at any particular time, there is a certain probability that a particular cell within a large population of cells will acquire a mutation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This makes it an example of Stigler's law and it has prompted some authors to argue that the Poisson distribution should bear the name of de Moivre. In 1860, Simon Newcomb fitted the Poisson distribution to the number of stars found in a unit of space."}, {"text": "The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution."}, {"text": "The probability distribution of the number of fixed points in a uniformly distributed random permutation approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion\u2013exclusion principle to show that the probability that there are no fixed points approaches 1/e.
When n is big enough, the probability distribution of fixed points is almost the Poisson distribution with expected value 1."}, {"text": "When generating a single bootstrap sample, instead of randomly drawing from the sample data with replacement, each data point is assigned a random weight distributed according to the Poisson distribution with"}, {"text": "All of the cumulants of the Poisson distribution are equal to the expected value \u03bb. The nth factorial moment of the Poisson distribution is \u03bbn."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}]}, {"question": "What is the difference between class interval and class boundary", "positive_ctxs": [{"text": "In class limit, the upper extreme value of the first class interval and the lower extreme value of the next class interval will not be equal. 
In class boundary, the upper extreme value of the first class interval and the lower extreme value of the next class interval will be equal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In musical set theory, an interval class (often abbreviated: ic), also known as unordered pitch-class interval, interval distance, undirected interval, or \"(even completely incorrectly) as 'interval mod 6'\" (Rahn 1980, 29; Whittall 2008, 273\u201374), is the shortest distance in pitch class space between two unordered pitch classes. For example, the interval class between pitch classes 4 and 9 is 5 because 9 \u2212 4 = 5 is less than 4 \u2212 9 = \u22125 \u2261 7 (mod 12). See modular arithmetic for more on modulo 12."}, {"text": "(assuming the class intervals are the same for all classes).Generally the class interval or class width is the same for all classes. The classes all taken together must cover at least the distance from the lowest value (minimum) in the data to the highest (maximum) value. Equal class intervals are preferred in frequency distribution, while unequal class intervals (for example logarithmic intervals) may be necessary in certain situations to produce a good spread of observations between the classes and avoid a large number of empty, or almost empty classes."}, {"text": "(assuming the class intervals are the same for all classes).Generally the class interval or class width is the same for all classes. The classes all taken together must cover at least the distance from the lowest value (minimum) in the data to the highest (maximum) value. Equal class intervals are preferred in frequency distribution, while unequal class intervals (for example logarithmic intervals) may be necessary in certain situations to produce a good spread of observations between the classes and avoid a large number of empty, or almost empty classes."}, {"text": "The underlying issue is that there is a class imbalance between the positive class and the negative class. 
Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets."}, {"text": "Calculate the range of the data (Range = Max \u2013 Min) by finding the minimum and maximum data values. Range will be used to determine the class interval or class width."}, {"text": "Calculate the range of the data (Range = Max \u2013 Min) by finding the minimum and maximum data values. Range will be used to determine the class interval or class width."}, {"text": "In a statistical-classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class."}]}, {"question": "What is convolutional neural network algorithm", "positive_ctxs": [{"text": "A Convolutional Neural Network (ConvNet/CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image and be able to differentiate one from the other."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "The penetrating face product is used in the tensor-matrix theory of digital antenna arrays. 
This operation can also be used in artificial neural network models, specifically convolutional layers."}, {"text": "Matlab: The neural network toolbox has explicit functionality designed to produce a time delay neural network give the step size of time delays and an optional training function. The default training algorithm is a Supervised Learning back-propagation algorithm that updates filter weights based on the Levenberg-Marquardt optimizations. The function is timedelaynet(delays, hidden_layers, train_fnc) and returns a time-delay neural network architecture that a user can train and provide inputs to."}, {"text": "The term receptive field is also used in the context of artificial neural networks, most often in relation to convolutional neural networks (CNNs). So, in a neural network context, the receptive field is defined as the size of the region in the input that produces the feature. Basically, it is a measure of association of an output feature (of any layer) to the input region (patch)."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Another neural gas variant inspired in the GNG algorithm is the incremental growing neural gas (IGNG). The authors propose the main advantage of this algorithm to be \"learning new data (plasticity) without degrading the previously trained network and forgetting the old input data (stability).\""}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. 
This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}]}, {"question": "Is Anova Multivariate analysis", "positive_ctxs": [{"text": "Multivariate ANOVA (MANOVA) extends the capabilities of analysis of variance (ANOVA) by assessing multiple dependent variables simultaneously. ANOVA statistically tests the differences between three or more group means. This statistical procedure tests multiple dependent variables at the same time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multivariate analysis of the document-term matrix can reveal topics/themes of the corpus. Specifically, latent semantic analysis and data clustering can be used, and more recently probabilistic latent semantic analysis and non-negative matrix factorization have been found to perform well for this task."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Garson, G. 
David, \"Factor Analysis,\" from Statnotes: Topics in Multivariate Analysis. Retrieved on April 13, 2009 from StatNotes: Topics in Multivariate Analysis, from G. David Garson at North Carolina State University, Public Administration Program"}, {"text": "Multivariate analysis (MVA) is based on the principles of multivariate statistics, which involves observation and analysis of more than one statistical outcome variable at a time. Typically, MVA is used to address the situations where multiple measurements are made on each experimental unit and the relations among these measurements and their structures are important. A modern, overlapping categorization of MVA includes:"}]}, {"question": "Is logistic regression guaranteed to converge", "positive_ctxs": [{"text": "A frequent problem in estimating logistic regression models is a failure of the likelihood maximization algorithm to converge. In most cases, this failure is a consequence of data patterns known as complete or quasi-complete separation. Log-likelihood as a function of the slope, quasi-complete separation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "tends to zero with probability 1 when the number of played rounds tends to infinity. Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played."}, {"text": "tends to zero with probability 1 when the number of played rounds tends to infinity. Intuitively, zero-regret strategies are guaranteed to converge to a (not necessarily unique) optimal strategy if enough rounds are played."}, {"text": "BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances"}, {"text": "BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. 
However, BFGS can have acceptable performance even for non-smooth optimization instances"}, {"text": "BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances"}, {"text": "The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved."}, {"text": "The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved."}]}, {"question": "Which of the following are examples of active learning", "positive_ctxs": [{"text": "Group projects, discussions, and writing are examples of active learning, because they involve doing something."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Active learning: Instead of assuming that all of the training examples are given at the start, active learning algorithms interactively collect new examples, typically by making queries to a human user. Often, the queries are based on unlabeled data, which is a scenario that combines semi-supervised learning with active learning."}, {"text": "Active learning: Instead of assuming that all of the training examples are given at the start, active learning algorithms interactively collect new examples, typically by making queries to a human user. Often, the queries are based on unlabeled data, which is a scenario that combines semi-supervised learning with active learning."}, {"text": "The use of multimedia and technology tools helps enhance the atmosphere of the classroom, thus enhancing the active learning experience. In this way, each student actively engages in the learning process. 
Teachers can use movies, videos, games, and other fun activities to enhance the effectiveness of the active learning process."}, {"text": "There are a wide range of alternatives for the term active learning, such as: learning through play, technology-based learning, activity-based learning, group work, project method, etc. The common factors in these are some significant qualities and characteristics of active learning. Active learning is the opposite of passive learning; it is learner-centered, not teacher-centered, and requires more than just listening; the active participation of each and every student is a necessary aspect in active learning."}, {"text": "Active learning is \"a method of learning in which students are actively or experientially involved in the learning process and where there are different levels of active learning, depending on student involvement.\" Bonwell & Eison (1991) states that \"students participate [in active learning] when they are doing something besides passively listening.\" In a report from the Association for the Study of Higher Education (ASHE), authors discuss a variety of methodologies for promoting active learning."}, {"text": "Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements:"}, {"text": "The following two examples use the Nearest Rank definition of quantile with rounding. For an explanation of this definition, see percentiles."}]}, {"question": "What is systematic sampling example", "positive_ctxs": [{"text": "As a hypothetical example of systematic sampling, assume that in a population of 10,000 people, a statistician selects every 100th person for sampling. 
The sampling intervals can also be systematic, such as choosing a new sample to draw from every 12 hours."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "samples with at least two elements adjacent to each other will never be chosen by systematic sampling). It is, however, much more efficient (if variance within systematic sample is more than variance of population). Systematic sampling is to be applied only if the given population is logically homogeneous, because systematic sample units are uniformly distributed over the population. The researcher must ensure that the chosen sampling interval does not hide a pattern."}, {"text": "samples with at least two elements adjacent to each other will never be chosen by systematic sampling). It is, however, much more efficient (if variance within systematic sample is more than variance of population). Systematic sampling is to be applied only if the given population is logically homogeneous, because systematic sample units are uniformly distributed over the population. The researcher must ensure that the chosen sampling interval does not hide a pattern."}, {"text": "Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult to quantify that accuracy.
(In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses \u2013 but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)"}, {"text": "Systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equiprobability method. In this approach, progression through the list is treated circularly, with a return to the top once the end of the list is passed."}, {"text": "Systematic sampling is a statistical method involving the selection of elements from an ordered sampling frame. The most common form of systematic sampling is an equiprobability method. In this approach, progression through the list is treated circularly, with a return to the top once the end of the list is passed."}, {"text": "As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases."}]}, {"question": "What is regret in reinforcement learning", "positive_ctxs": [{"text": "Mathematically speaking, the regret is expressed as the difference between the payoff (reward or return) of a possible action and the payoff of the action that has been actually taken. If we denote the payoff function as u the formula becomes: regret = u(possible action) - u(action taken)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The theory of regret aversion or anticipated regret proposes that when facing a decision, individuals might anticipate regret and thus incorporate in their choice their desire to eliminate or reduce this possibility. 
Regret is a negative emotion with a powerful social and reputational component, and is central to how humans learn from experience and to the human psychology of risk aversion. Conscious anticipation of regret creates a feedback loop that elevates regret from the emotional realm\u2014often modeled as mere human behavior\u2014into the realm of the rational choice behavior that is modeled in decision theory."}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. 
The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning where a neural network is used to represent policies or value functions. As in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single layered neural network, it is sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon."}, {"text": "The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system acts upon in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. This is in contrast to traditional learning techniques which rely on supervised learning approaches that are less flexible, reinforcement learning recommendation techniques allow to potentially train models that can be optimized directly on metrics of engagement, and user interest."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. 
Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning. Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning. Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}]}, {"question": "What is localization in deep learning", "positive_ctxs": [{"text": "Classification/Recognition: Given an image with an object, find out what that object is. In other words, classify it in a class from a set of predefined categories. Localization: Find where the object is and draw a bounding box around it."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In such situations, the particle filter can give better performance than parametric filters. Another non-parametric approach to Markov localization is the grid-based localization, which uses a histogram to represent the belief distribution. Compared with the grid-based approach, the Monte Carlo localization is more accurate because the state represented in samples is not discretized."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ?
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}]}, {"question": "What is partitioning of data", "positive_ctxs": [{"text": "Definition. Data Partitioning is the technique of distributing data across multiple tables, disks, or sites in order to improve query processing performance or increase database manageability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "This type of partitioning is also called \"row splitting\", since rows get split by their columns, and might be performed explicitly or implicitly. Distinct physical machines might be used to realize vertical partitioning: Storing infrequently used or very wide columns, taking up a significant amount of memory, on a different machine, for example, is a method of vertical partitioning. 
A common form of vertical partitioning is to split static data from dynamic data, since the former is faster to access than the latter, particularly for a table where the dynamic data is not used as often as the static."}, {"text": "Consider the ordered list {1,2,3,4} which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?"}, {"text": "For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB) provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. This works by partitioning the data set into"}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using this variant method?"}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What does gamma distribution mean", "positive_ctxs": [{"text": "Definition: Gamma distribution is a distribution that arises naturally in processes for which the waiting times between events are relevant. It can be thought of as a waiting time between Poisson distributed events."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse scale forms a conjugate prior. 
The compound distribution, which results from integrating out the inverse scale, has a closed-form solution, known as the compound gamma distribution. If instead the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then it results in K-distribution."}, {"text": "In oncology, the age distribution of cancer incidence often follows the gamma distribution, whereas the shape and scale parameters predict, respectively, the number of driver events and the time interval between them. In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals. In bacterial gene expression, the copy number of a constitutively expressed protein often follows the gamma distribution, where the scale and shape parameter are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced by a single mRNA during its lifetime. In genomics, the gamma distribution was applied in peak calling step (i.e. in recognition of signal) in ChIP-chip and ChIP-seq data analysis."}, {"text": "In this case, VB would compute optimum estimates of the four parameters of the normal-scaled inverse gamma distribution that describes the joint distribution of the mean and variance of the component."}, {"text": "with known mean \u03bc, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience."}, {"text": "with known mean \u03bc, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations.
Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience."}, {"text": "with known mean \u03bc, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience."}, {"text": "with known mean \u03bc, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience."}]}, {"question": "What is PLS in statistics", "positive_ctxs": [{"text": "Partial least squares regression (PLS regression) is a statistical method that bears some relation to principal components regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology."}, {"text": "Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology."}]}, {"question": "How do you write logistic regression results", "positive_ctxs": [{"text": "Writing up results: First, present descriptive statistics in a table.
Organize your results in a table (see Table 3) stating your dependent variable (dependent variable = YES) and state that these are \"logistic regression results.\" When describing the statistics in the tables, point out the highlights for the reader."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to non-convergence. Regularized logistic regression is specifically intended to be used in this situation."}, {"text": "Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to non-convergence.
Regularized logistic regression is specifically intended to be used in this situation."}, {"text": "Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to non-convergence. Regularized logistic regression is specifically intended to be used in this situation."}]}, {"question": "What is a probability distribution in statistics", "positive_ctxs": [{"text": "A probability distribution is a statistical function that describes all the possible values and likelihoods that a random variable can take within a given range. These factors include the distribution's mean (average), standard deviation, skewness, and kurtosis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. 
This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters. Conversely a non-parametric model differs precisely in that it makes no assumptions about a parametric distribution when modeling the data."}, {"text": "In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term \"mode\" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics."}]}, {"question": "Why is the formula of sample variance different from population variance", "positive_ctxs": [{"text": "The sample variance is an estimator for the population variance. When applied to sample data, the population variance formula is a biased estimator of the population variance: it tends to underestimate the amount of variability. We are using one fitted value (sample mean) in our estimate of the variance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In other words, the expected value of the uncorrected sample variance does not equal the population variance \u03c32, unless multiplied by a normalization factor. 
The sample mean, on the other hand, is an unbiased estimator of the population mean \u03bc. Note that the usual definition of sample variance is"}, {"text": "In other words, the expected value of the uncorrected sample variance does not equal the population variance \u03c32, unless multiplied by a normalization factor. The sample mean, on the other hand, is an unbiased estimator of the population mean \u03bc. Note that the usual definition of sample variance is"}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean."}, {"text": "That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem."}, {"text": "That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem."}]}, {"question": "What is the correlation between two independent random variables", "positive_ctxs": [{"text": "Correlation measures linearity between X and Y. If \u03c1(X,Y) = 0 we say that X and Y are \u201cuncorrelated.\u201d If two variables are independent, then their correlation will be 0.
However, like with covariance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors."}, {"text": "In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed. If we are interested in finding to what extent there is a numerical relationship between two variables of interest, using their correlation coefficient will give misleading results if there is another, confounding, variable that is numerically related to both variables of interest. This misleading information can be avoided by controlling for the confounding variable, which is done by computing the partial correlation coefficient."}, {"text": "In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related."}, {"text": "In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. 
In the broadest sense correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related."}, {"text": "The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or \u22121 occurs when each of the variables is a perfect monotone function of the other."}, {"text": "When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables."}, {"text": "A simple way to compute the sample partial correlation for some data is to solve the two associated linear regression problems, get the residuals, and calculate the correlation between the residuals. Let X and Y be, as above, random variables taking real values, and let Z be the n-dimensional vector-valued random variable. We write xi, yi and zi to denote the ith of N i.i.d."}]}, {"question": "What is noise in signal detection theory", "positive_ctxs": [{"text": "Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sensitivity index or d' (pronounced 'dee-prime') is a statistic used in signal detection theory. 
It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with mean and standard deviations"}, {"text": "The sensitivity index or d' (pronounced 'dee-prime') is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with mean and standard deviations"}, {"text": "The sensitivity index or d' (pronounced 'dee-prime') is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with mean and standard deviations"}, {"text": "The sensitivity index or d' (pronounced 'dee-prime') is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with mean and standard deviations"}, {"text": "The sensitivity index or d' (pronounced 'dee-prime') is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, compared against the standard deviation of the noise distribution. For normally distributed signal and noise with mean and standard deviations"}, {"text": "The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). 
Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing."}, {"text": "Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, the separation of such patterns from a disguising background is referred to as signal recovery. According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed."}]}, {"question": "What can you tell from a histogram", "positive_ctxs": [{"text": "A histogram shows bars representing numerical values by range of value. A bar chart shows categories, not numbers, with bars indicating the amount of each category. Histogram example: student's ages, with a bar showing the number of students in each year."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys.
What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Furthermore, some types of signals (very often the case for images) use whole number representations: in these cases, histogram medians can be far more efficient because it is simple to update the histogram from window to window, and finding the median of a histogram is not particularly onerous."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}, {"text": "Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students where 25% are girls. And by definition, chance of this random student being a girl is 25%. Every Bayes theorem problem can be solved in this way."}]}, {"question": "Can a discrete variable take any fractional value", "positive_ctxs": [{"text": "Discrete random variables can only take on values from a countable set of numbers such as the integers or some subset of integers. (Usually, they can't be fractions.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In mathematics, a variable may be continuous or discrete.
If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value."}, {"text": "For example, a variable over a non-empty range of the real numbers is continuous, if it can take on any value in that range. The reason is that any range of real numbers between"}, {"text": "In contrast, a discrete variable over a particular range of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The number of permitted values is either finite or countably infinite. Common examples are variables that must be integers, non-negative integers, positive integers, or only the integers 0 and 1."}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. 
Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}."}, {"text": "Suppose that the probability that a pedestrian will be hit by a car, while crossing the road at a pedestrian crossing, without paying attention to the traffic light, is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L (for traffic light) be a discrete random variable taking one value from {Red, Yellow, Green}."}]}, {"question": "Which activation function is used for binary classification", "positive_ctxs": [{"text": "Softmax. Thus sigmoid is widely used for binary classification problems. While building a network for a multiclass problem, the output layer would have as many neurons as the number of classes in the target. For instance if you have three classes, there would be three neurons in the output layer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A true perceptron performs binary classification, an MLP neuron is free to either perform classification or regression, depending upon its activation function."}, {"text": "A true perceptron performs binary classification, an MLP neuron is free to either perform classification or regression, depending upon its activation function."}, {"text": "A true perceptron performs binary classification, an MLP neuron is free to either perform classification or regression, depending upon its activation function."}, {"text": "Folding activation functions are extensively used in the pooling layers in convolutional neural networks, and in output layers of multiclass classification networks. These activations perform aggregation over the inputs, such as taking the mean, minimum or maximum. In multiclass classification the softmax activation is often used."}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point).
A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}, {"text": "is non-linear and differentiable (even if the ReLU is not in one point). A historically used activation function is the logistic function:"}]}, {"question": "What are best image processing ideas", "positive_ctxs": [{"text": "Best Image Processing Projects Collection: License plate recognition. Face Emotion recognition. Face recognition. Cancer detection. Object detection. Pedestrian detection. Lane detection for ADAS. Blind assistance systems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One of the most common algorithms is the \"moving average\", often used to try to capture important trends in repeated statistical surveys. In image processing and computer vision, smoothing ideas are used in scale space representations. The simplest smoothing algorithm is the \"rectangular\" or \"unweighted sliding-average smooth\"."}, {"text": "Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both shifts in time and in frequency. This inspired translation invariance in image processing with CNNs. The tiling of neuron outputs can cover timed stages. TDNNs now achieve the best performance in far distance speech recognition."}, {"text": "Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both shifts in time and in frequency. This inspired translation invariance in image processing with CNNs.
The tiling of neuron outputs can cover timed stages.TDNNs now achieve the best performance in far distance speech recognition."}, {"text": "Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both shifts in time and in frequency. This inspired translation invariance in image processing with CNNs. The tiling of neuron outputs can cover timed stages.TDNNs now achieve the best performance in far distance speech recognition."}, {"text": "Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both shifts in time and in frequency. This inspired translation invariance in image processing with CNNs. The tiling of neuron outputs can cover timed stages.TDNNs now achieve the best performance in far distance speech recognition."}, {"text": "Since these TDNNs operated on spectrograms, the resulting phoneme recognition system was invariant to both shifts in time and in frequency. This inspired translation invariance in image processing with CNNs. The tiling of neuron outputs can cover timed stages.TDNNs now achieve the best performance in far distance speech recognition."}]}, {"question": "What is an instance in machine learning", "positive_ctxs": [{"text": "A single object of the world from which a model will be learned, or on which a model will be used (e.g., for prediction). In most machine learning work, instances are described by feature vectors; some work uses more complex representations (e.g., containing relations between instances or between parts of instances)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Problem of multi-instance learning is not unique to drug finding. In 1998, Maron and Ratan found another application of multiple instance learning to scene classification in machine vision, and devised Diverse Density framework. 
Given an image, an instance is taken to be one or more fixed-size subimages, and the bag of instances is taken to be the entire image."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}, {"text": "This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. 
Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."}]}, {"question": "How do you find the sample size when given the mean and standard deviation", "positive_ctxs": [{"text": "First multiply the critical value by the standard deviation. Then divide this result by the error from Step 1. Now square this result. This result is the sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. 
In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process."}]}, {"question": "What does the coefficient of determination tell you", "positive_ctxs": [{"text": "The coefficient of determination is a measurement used to explain how much variability of one factor can be caused by its relationship to another related factor. 
This correlation, known as the \"goodness of fit,\" is represented as a value between 0.0 and 1.0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A related effect size is r2, the coefficient of determination (also referred to as R2 or \"r-squared\"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r2 is always positive, so does not convey the direction of the correlation between the two variables."}, {"text": "Similarly, for a regression analysis, an analyst would report the coefficient of determination (R2) and the model equation instead of the model's p-value."}, {"text": "When an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1."}, {"text": "are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not \u22121 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of"}, {"text": "are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not \u22121 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "is the coefficient of determination of a regression of explanator j on all the other explanators. A tolerance of less than 0.20 or 0.10 and/or a VIF of 5 or 10 and above indicates a multicollinearity problem."}]}, {"question": "What is the point of a box plot", "positive_ctxs": [{"text": "Box plots divide the data into sections that each contain approximately 25% of the data in that set. Box plots are useful as they provide a visual summary of the data enabling researchers to quickly identify mean values, the dispersion of the data set, and signs of skewness."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A boxplot is constructed of two parts, a box and a set of whiskers shown in Figure 2. The lowest point is the minimum of the data set and the highest point is the maximum of the data set. The box is drawn from Q1 to Q3 with a horizontal line drawn in the middle to denote the median."}, {"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "The axis-aligned minimum bounding box (or AABB) for a given point set is its minimum bounding box subject to the constraint that the edges of the box are parallel to the (Cartesian) coordinate axes. 
It is the Cartesian product of N intervals each of which is defined by the minimal and maximal value of the corresponding coordinate for the points in S."}, {"text": "The arbitrarily oriented minimum bounding box is the minimum bounding box, calculated subject to no constraints as to the orientation of the result. Minimum bounding box algorithms based on the rotating calipers method can be used to find the minimum-area or minimum-perimeter bounding box of a two-dimensional convex polygon in linear time, and of a two-dimensional point set in the time it takes to construct its convex hull followed by a linear-time computation. A three-dimensional rotating calipers algorithm can find the minimum-volume arbitrarily-oriented bounding box of a three-dimensional point set in cubic time."}, {"text": "In geometry, the minimum or smallest bounding or enclosing box for a point set (S) in N dimensions is the box with the smallest measure (area, volume, or hypervolume in higher dimensions) within which all the points lie. When other kinds of measure are used, the minimum box is usually called accordingly, e.g., \"minimum-perimeter bounding box\"."}, {"text": "Variable width box plots illustrate the size of each group whose data is being plotted by making the width of the box proportional to the size of the group. A popular convention is to make the box width proportional to the square root of the size of the group.Notched box plots apply a \"notch\" or narrowing of the box around the median. 
Notches are useful in offering a rough guide to significance of difference of medians; if the notches of two boxes do not overlap, this offers evidence of a statistically significant difference between the medians."}]}, {"question": "Data Science Can machine learning be used for time series analysis", "positive_ctxs": [{"text": "The general idea is that machine learning, while not always the perfect choice, can be powerful in modeling time series data due to its ability to handle non-linear data. The feature engineering applied to the time series data in a machine learning approach is the key to how successful the model will be."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other application are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering, classification, query by content, anomaly detection as well as forecasting."}, {"text": "In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. Other application are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering, classification, query by content, anomaly detection as well as forecasting."}, {"text": "In the context of statistics, econometrics, quantitative finance, seismology, meteorology, and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection. 
Other application are in data mining, pattern recognition and machine learning, where time series analysis can be used for clustering, classification, query by content, anomaly detection as well as forecasting."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Interrupted time series analysis is used to detect changes in the evolution of a time series from before to after some intervention which may affect the underlying variable."}, {"text": "Within the workshops, data scientists use tools like Data Science Experience (DSX) to collaborate and find similar solutions to their use cases. The machine learning experts have completed cases in the travel, energy and utilities, healthcare, financial services, manufacturing, and retail industries. Together, they walk through the stages of the machine learning process to get the concrete results."}]}, {"question": "What is the use of bivariate analysis", "positive_ctxs": [{"text": "Bivariate analysis is one of the simplest forms of quantitative (statistical) analysis. It involves the analysis of two variables (often denoted as X, Y), for the purpose of determining the empirical relationship between them. Bivariate analysis can be helpful in testing simple hypotheses of association."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Like univariate analysis, bivariate analysis can be descriptive or inferential. It is the analysis of the relationship between the two variables. 
Bivariate analysis is a simple (two variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously)."}, {"text": "Like univariate analysis, bivariate analysis can be descriptive or inferential. It is the analysis of the relationship between the two variables. Bivariate analysis is a simple (two variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously)."}, {"text": "Canonical correlation analysis finds linear relationships among two sets of variables; it is the generalised (i.e. canonical) version of bivariate correlation."}, {"text": "Canonical correlation analysis finds linear relationships among two sets of variables; it is the generalised (i.e. canonical) version of bivariate correlation."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}]}, {"question": "What is a model in regression analysis", "positive_ctxs": [{"text": "Model specification refers to the determination of which independent variables should be included in or excluded from a regression equation. 
A multiple regression model is, in fact, a theoretical statement about the causal relationship between one or more independent variables and a dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Nonparametric regression is a category of regression analysis in which the predictor does not take a predetermined form but is constructed according to information derived from the data. That is, no parametric form is assumed for the relationship between predictors and dependent variable. Nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates."}, {"text": "In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters."}, {"text": "In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. 
While regression analysis is often employed in such a way as to test relationships between one more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Also, one should not follow up an exploratory analysis with a confirmatory analysis in the same dataset. An exploratory analysis is used to find ideas for a theory, but not to test that theory as well. 
When a model is found exploratory in a dataset, then following up that analysis with a confirmatory analysis in the same dataset could simply mean that the results of the confirmatory analysis are due to the same type 1 error that resulted in the exploratory model in the first place."}]}, {"question": "How do you find the class interval in a frequency table", "positive_ctxs": [{"text": "The steps in grouping may be summarized as follows:Decide on the number of classes.Determine the range, i.e., the difference between the highest and lowest observations in the data.Divide range by the number of classes to estimate approximate size of the interval (h).More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. An example is shown below"}, {"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. 
An example is shown below"}, {"text": "In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. They are heavily used in survey research, business intelligence, engineering, and scientific research. They provide a basic picture of the interrelation between two variables and can help find interactions between them."}, {"text": "In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. They are heavily used in survey research, business intelligence, engineering, and scientific research. They provide a basic picture of the interrelation between two variables and can help find interactions between them."}, {"text": "In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the (multivariate) frequency distribution of the variables. They are heavily used in survey research, business intelligence, engineering, and scientific research. They provide a basic picture of the interrelation between two variables and can help find interactions between them."}]}, {"question": "What is joint distribution in statistics", "positive_ctxs": [{"text": "A joint probability distribution shows a probability distribution for two (or more) random variables. Instead of events being labeled A and B, the norm is to use X and Y. The formal definition is: f(x,y) = P(X = x, Y = y) The whole point of the joint distribution is to look for a relationship between two variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. 
The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution."}, {"text": "We assume that the source is producing independent symbols, with possibly different output statistics at each instant. We assume that the statistics of the process are known completely, that is, the marginal distribution of the process seen at each time instant is known. The joint distribution is just the product of marginals."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}]}, {"question": "Why is the word deep used in deep learning", "positive_ctxs": [{"text": "The word \"deep\" in \"deep learning\" refers to the number of layers through which the data is transformed. Deep models (CAP > 2) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. 
Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. 
LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}, {"text": "Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space."}]}, {"question": "How does change detection work in angular", "positive_ctxs": [{"text": "How Change Detection WorksDeveloper updates the data model, e.g. by updating a component binding.Angular detects the change.Change detection checks every component in the component tree from top to bottom to see if the corresponding model has changed.If there is a new value, it will update the component's view (DOM)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. 
An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "Specific applications, like step detection and edge detection, may be concerned with changes in the mean, variance, correlation, or spectral density of the process. More generally change detection also includes the detection of anomalous behavior: anomaly detection."}, {"text": "How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending in the exact numbers, the average causal effect might be an increase in blood pressure."}, {"text": "is available and the goal is to identify whether any change point(s) occurred in the series. This is an example of post hoc analysis and is often approached using hypothesis testing methods. By contrast, online change point detection is concerned with detecting change points in an incoming data stream."}, {"text": "The problem of change point detection can be narrowed down further into more specific problems. In offline change point detection it is assumed that a sequence of length"}]}, {"question": "Whats the difference between Multilayer Perceptron and Restricted Boltzmann Machine", "positive_ctxs": [{"text": "MLP usually means many layers and can be supervised with labels. RBM (Restricted Boltzmann Machine) consists of only 2 layers: input layer & hidden layer, and it is un-supervised (no labels). RBM (Restricted Boltzmann Machine) consists of only 2 layers: input layer & hidden layer, and it is un-supervised (no labels)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The visible units of Restricted Boltzmann Machine can be multinomial, although the hidden units are Bernoulli. 
In this case, the logistic function for visible units is replaced by the softmax function"}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "Restricted Boltzmann machines (RBMs) are often used as a building block for multilayer learning architectures. An RBM can be represented by an undirected bipartite graph consisting of a group of binary hidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. 
It is a special case of the more general Boltzmann machines with the constraint of no intra-node connections."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}, {"text": "Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines and Independent Subspace Analysis."}]}, {"question": "What is mean in Poisson distribution", "positive_ctxs": [{"text": "Poisson Formula. P(x; \u03bc) = (e-\u03bc) (\u03bcx) / x! where x is the actual number of successes that result from the experiment, and e is approximately equal to 2.71828. The Poisson distribution has the following properties: The mean of the distribution is equal to \u03bc . The variance is also equal to \u03bc ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean \u03bc, a confidence interval for \u03bc with confidence level 1 \u2013 \u03b1 is"}, {"text": "To display the intuition behind this statement, consider two independent Poisson processes, \u201cSuccess\u201d and \u201cFailure\u201d, with intensities p and 1 \u2212 p. 
Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure. If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the rth occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 \u2212 p, i.e., T is gamma-distributed with shape parameter r and intensity 1 \u2212 p. Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity (1 \u2212 p)/p."}, {"text": "To display the intuition behind this statement, consider two independent Poisson processes, \u201cSuccess\u201d and \u201cFailure\u201d, with intensities p and 1 \u2212 p. Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure. If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the rth occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 \u2212 p, i.e., T is gamma-distributed with shape parameter r and intensity 1 \u2212 p. 
Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity (1 \u2212 p)/p."}, {"text": "The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution."}, {"text": "The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the \"law of small numbers\" because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz about the Poisson distribution, published in 1898."}, {"text": "In probability theory and statistics, the Poisson distribution (; French pronunciation: \u200b[pwas\u0254\u0303]), named after French mathematician Sim\u00e9on Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant mean rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume."}, {"text": "The probability distribution of the number of fixed points in a uniformly distributed random permutation approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion\u2013exclusion principle to show that the probability that there are no fixed points approaches 1/e. 
When n is big enough, the probability distribution of fixed points is almost the Poisson distribution with expected value 1."}]}, {"question": "What are some applications of vectors in real life", "positive_ctxs": [{"text": "Vectors have many real-life applications, including situations involving force or velocity. For example, consider the forces acting on a boat crossing a river. The boat's motor generates a force in one direction, and the current of the river generates a force in another direction. Both forces are vectors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Engaged: real life tasks are reflected in the activities conducted for learning.Active learning requires appropriate learning environments through the implementation of correct strategy. Characteristics of learning environment are:"}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "An extension of word vectors for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics."}, {"text": "An extension of word vectors for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. 
Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics."}, {"text": "An extension of word vectors for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics."}, {"text": "An extension of word vectors for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics."}, {"text": "An extension of word vectors for n-grams in biological sequences (e.g. DNA, RNA, and Proteins) for bioinformatics applications have been proposed by Asgari and Mofrad. Named bio-vectors (BioVec) to refer to biological sequences in general with protein-vectors (ProtVec) for proteins (amino-acid sequences) and gene-vectors (GeneVec) for gene sequences, this representation can be widely used in applications of machine learning in proteomics and genomics."}]}, {"question": "Why is data important in AI", "positive_ctxs": [{"text": "Data quality is important when applying Artificial Intelligence techniques, because the results of these solutions will be as good or bad as the quality of the data used. 
The algorithms that feed systems based on Artificial Intelligence can only assume that the data to be analyzed are reliable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"Marvin Minsky writes \"This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence? \"Nick Bostrom observes that \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.\""}, {"text": "Dietterich and Horvitz echo the \"Sorcerer's Apprentice\" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: \"An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally.\" This concern becomes more serious as AI software advances in autonomy and flexibility."}, {"text": "It is standard practice in physics to perform blinded data analysis. After data analysis is complete, one is allowed to unblind the data or \"open the box\". It is important to agree beforehand to publish the data regardless of the results of the analysis to prevent publication bias."}, {"text": "It is standard practice in physics to perform blinded data analysis. After data analysis is complete, one is allowed to unblind the data or \"open the box\". 
It is important to agree beforehand to publish the data regardless of the results of the analysis to prevent publication bias."}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}, {"text": "Daily chart \u2013 Unlikely results - Why most published scientific research is probably false \u2013 Illustration of False positives and false negatives in The Economist appearing in the article Problems with scientific research How science goes wrong Scientific research has changed the world. Now it needs to change itself (19 October 2013)"}]}, {"question": "What is Dimension machine learning", "positive_ctxs": [{"text": "The number of input variables or features for a dataset is referred to as its dimensionality. Large numbers of input features can cause poor performance for machine learning algorithms. Dimensionality reduction is a general field of study concerned with reducing the number of input features."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "VC Dimension uses the principles of measure theory and finds the maximum capacity under the best possible circumstances. This is, given input data in a specific form. 
As noted in, the VC Dimension for arbitrary inputs is half the information capacity of a Perceptron."}, {"text": "VC Dimension uses the principles of measure theory and finds the maximum capacity under the best possible circumstances. This is, given input data in a specific form. As noted in, the VC Dimension for arbitrary inputs is half the information capacity of a Perceptron."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the real world in the Matrix", "positive_ctxs": [{"text": "The Real World is a term by the redpills to refer to reality, the true physical world and life outside the Matrix."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What goes on in the world \u2013 the system's \"subjective\" experience \u2013 is represented internally by a sequence of patterns in the focus. The memory stores this sequence and can recreate it later in the focus if addressed with a pattern similar to one encountered in the past. Thus, the memory learns to predict what is about to happen."}, {"text": "In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all."}, {"text": "At the same time, latent variables link observable (\"sub-symbolic\") data in the real world to symbolic data in the modeled world."}, {"text": "At the same time, latent variables link observable (\"sub-symbolic\") data in the real world to symbolic data in the modeled world."}, {"text": "At the same time, latent variables link observable (\"sub-symbolic\") data in the real world to symbolic data in the modeled world."}, {"text": "On the other hand, in VR the surrounding environment is completely virtual. A demonstration of how AR layers objects onto the real world can be seen with augmented reality games. 
WallaMe is an augmented reality game application that allows users to hide messages in real environments, utilizing geolocation technology in order to enable users to hide messages wherever they may wish in the world."}, {"text": "that he has always been dreaming, in which case the objects he perceives actually exist, albeit in his imagination.Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses; however in raising these doubts, just as Descartes noted that his own thinking led him to be convinced of his own existence, the existence of the argument itself is testament to the possibility of its own truth. Another state of mind in which some argue an individual's perceptions have no physical basis in the real world is called psychosis though psychosis may have a physical basis in the real world and explanations vary."}]}, {"question": "What is a weight in machine learning", "positive_ctxs": [{"text": "Weights and biases (commonly referred to as w and b) are the learnable parameters of a machine learning model. When the inputs are transmitted between neurons, the weights are applied to the inputs along with the bias. A neuron. Weights control the signal (or the strength of the connection) between two neurons."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How do you find the population standard deviation", "positive_ctxs": [{"text": "First, let's review how to calculate the population standard deviation:Calculate the mean (simple average of the numbers).For each number: Subtract the mean. Square the result.Calculate the mean of those squared differences. Take the square root of that to obtain the population standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation \u03c3 is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers)."}, {"text": "One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation \u03c3 is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. 
Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers)."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. 
The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}]}, {"question": "What are the steps in convolution neural network", "positive_ctxs": [{"text": "A Convolutional Neural Networks Introduction so to speak.Step 1: Convolution Operation. Step 1(b): ReLU Layer. Step 2: Pooling. Step 3: Flattening. Step 4: Full Connection. Step 1 - Convolution Operation. Step 1(b): The Rectified Linear Unit (ReLU) Step 2 - Max Pooling.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction\u2014forward\u2014from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."}, {"text": "The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction\u2014forward\u2014from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."}, {"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with"}, {"text": "The actual complexity of modeling biological neurons has been explored in OpenWorm project that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. 
However, although the task seemed simple at the beginning, the models based on a generic neural network did not work."}, {"text": "The actual complexity of modeling biological neurons has been explored in OpenWorm project that was aimed on complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network has been well documented before the start of the project. However, although the task seemed simple at the beginning, the models based on a generic neural network did not work."}, {"text": "(See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s."}, {"text": "(See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s."}]}, {"question": "What is PAC in machine learning", "positive_ctxs": [{"text": "In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is not common to use PAC in a dedicated vessel, due to the high head loss that would occur. Instead, PAC is generally added directly to other process units, such as raw water intakes, rapid mix basins, clarifiers, and gravity filters."}, {"text": "PAC material is finer material. PAC is made up of crushed or ground carbon particles, 95\u2013100% of which will pass through a designated mesh sieve. 
The ASTM classifies particles passing through an 80-mesh sieve (0.177 mm) and smaller as PAC."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the limitation of the F ratio in Anova", "positive_ctxs": [{"text": "The disadvantage of the ANOVA F-test is that if we reject the null hypothesis, we do not know which treatments can be said to be significantly different from the others, nor, if the F-test is performed at level \u03b1, can we state that the treatment pair with the greatest mean difference is significantly different at level"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. 
One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. 
Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (\u03b1). If F \u2265 FCritical, the null hypothesis is rejected."}]}, {"question": "What is the difference between forward and backward chaining in artificial intelligence", "positive_ctxs": [{"text": "The difference between forward and backward chaining is: Backward chaining starts with a goal and then searches back through inference rules to find the facts that support the goal. Forward chaining starts with facts and searches forward through the rules to find a desired goal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward chaining inference. The forward chaining approach is often employed by expert systems, such as CLIPS."}, {"text": "Inference engines work primarily in one of two modes either special rule or facts: forward chaining and backward chaining. 
Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved."}, {"text": "Forward chaining (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining."}, {"text": "Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate table bases for chess endgames for computer chess."}, {"text": "Backward chaining is a bit less straight forward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}, {"text": "Backward chaining is a bit less straight forward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}, {"text": "Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. 
The backward chaining approach is often employed by expert systems."}]}, {"question": "How do you do a two sided hypothesis test", "positive_ctxs": [{"text": "Hypothesis Testing \u2014 2-tailed testSpecify the Null(H0) and Alternate(H1) hypothesis.Choose the level of Significance(\u03b1)Find Critical Values.Find the test statistic.Draw your conclusion."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. 
We do not reject the null hypothesis if the test is non-significant."}]}, {"question": "How does a neural network function", "positive_ctxs": [{"text": "The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a humanlike way."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "Matlab: The neural network toolbox has explicit functionality designed to produce a time delay neural network give the step size of time delays and an optional training function. 
The default training algorithm is a Supervised Learning back-propagation algorithm that updates filter weights based on the Levenberg-Marquardt optimizations. The function is timedelaynet(delays, hidden_layers, train_fnc) and returns a time-delay neural network architecture that a user can train and provide inputs to."}, {"text": "In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}]}, {"question": "Which loss function is used for binary classification", "positive_ctxs": [{"text": "We use binary cross-entropy loss for classification models which output a probability p. The range of the sigmoid function is [0, 1] which makes it suitable for calculating probability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. 
A variant for classification is also sometimes used."}, {"text": "Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0-1 loss function (0\u20131 indicator function), which takes the value of 0 if the predicted classification equals that of the true class or a 1 if the predicted classification does not match the true class. This selection is modeled by"}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression."}, {"text": "However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute loss function surrogates which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem."}, {"text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs).For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs).For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "The logistic loss is sometimes called cross-entropy loss. 
It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}]}, {"question": "What is the use of uniform distribution", "positive_ctxs": [{"text": "The uniform distribution defines equal probability over a given range for a continuous distribution. For this reason, it is important as a reference distribution. One of the most important applications of the uniform distribution is in the generation of random numbers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution. This result is summarized as:"}, {"text": "The differential entropy of the beta distribution is negative for all values of \u03b1 and \u03b2 greater than zero, except at \u03b1 = \u03b2 = 1 (for which values the beta distribution is the same as the uniform distribution), where the differential entropy reaches its maximum value of zero. It is to be expected that the maximum entropy should take place when the beta distribution becomes equal to the uniform distribution, since uncertainty is maximal when all possible events are equiprobable."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The density of the maximum entropy distribution for this class is constant on each of the intervals [aj-1,aj). 
The uniform distribution on the finite set {x1,...,xn} (which assigns a probability of 1/n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set."}, {"text": "The value x = 0.5 is an atom of the distribution of X, thus, the corresponding conditional distribution is well-defined and may be calculated by elementary means (the denominator does not vanish); the conditional distribution of Y given X = 0.5 is uniform on (2/3, 1). Measure theory leads to the same result."}, {"text": "In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as well as any other distribution in which the maximum distribution is achieved for a range of values, e.g."}, {"text": ", and the underlying random variable is continuous, then the probability distribution of the p-value is uniform on the interval [0,1]. By contrast, if the alternative hypothesis is true, the distribution is dependent on sample size and the true value of the parameter being studied.The distribution of p-values for a group of studies is sometimes called a p-curve. The curve is affected by four factors: the proportion of studies that examined false null hypotheses, the power of the studies that investigated false null hypotheses, the alpha levels, and publication bias."}]}, {"question": "What is the difference between binomial and normal distribution", "positive_ctxs": [{"text": "Normal distribution describes continuous data which have a symmetric distribution, with a characteristic 'bell' shape. Binomial distribution describes the distribution of binary data from a finite sample. 
Thus it gives the probability of getting r events out of n trials."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The distribution of N thus is the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d)."}, {"text": "Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead.Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: If E(Y) = \u03bc, the quasi-Poisson model assumes var(Y) = \u03b8\u03bc while the gamma-Poisson assumes var(Y) = \u03bc(1 + \u03ba\u03bc), where \u03b8 is the quasi-Poisson overdispersion parameter, and \u03ba is the shape parameter of the negative binomial distribution. For both models, parameters are estimated using Iteratively reweighted least squares. For quasi-Poisson, the weights are \u03bc/\u03b8."}, {"text": "Because the square of a standard normal distribution is the chi-square distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-square distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. 
Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed)."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "The beta distribution is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial distribution and categorical distribution."}, {"text": "The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed \u2014 see law of rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. 
There is a rule of thumb stating that the Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n \u2265 100 and np \u2264 10."}]}, {"question": "What are the uses of range in statistics", "positive_ctxs": [{"text": "Given that the range can easily be computed with information on the maximum and minimum value of the data set, users requiring only a rough indication of the data may prefer to use this indicator over more sophisticated measures of spread, like the standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "While many censuses were conducted in antiquity, there are few population statistics that survive. One example though can be found in the Bible, in chapter 1 of the Book of Numbers. Not only are the statistics given, but the method used to compile those statistics is also described."}]}, {"question": "When should nonparametric statistics be used", "positive_ctxs": [{"text": "Nonparametric tests are also called distribution-free tests because they don't assume that your data follow a specific distribution. You may have heard that you should use nonparametric tests when your data don't meet the assumptions of the parametric test, especially the assumption about normally distributed data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When a sample consists of more than one variable, descriptive statistics may be used to describe the relationship between pairs of variables. In this case, descriptive statistics include:"}, {"text": "The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. 
Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored."}, {"text": "The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored."}, {"text": "A Wilcoxon signed-rank test is a nonparametric test that can be used to determine whether two dependent samples were selected from populations having the same distribution."}, {"text": "Thus, the bootstrap is mainly recommended for distribution estimation.\" There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odd ratios, regression coefficients, etc."}, {"text": "Thus, the bootstrap is mainly recommended for distribution estimation.\" There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odd ratios, regression coefficients, etc."}, {"text": "Thus, the bootstrap is mainly recommended for distribution estimation.\" There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odd ratios, regression coefficients, etc."}]}, {"question": "What is ground truth in AI", "positive_ctxs": [{"text": "Ground truth is a term used in statistics and machine learning that means checking the results of machine learning for accuracy against the real world. 
The term is borrowed from meteorology, where \"ground truth\" refers to information obtained on site."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm \u2013 inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts."}, {"text": "is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, then"}, {"text": "is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, then"}, {"text": "In slang, the coordinates indicate where we think George Washington's nose is located, and the ground truth is where it really is. In practice a smart phone or hand-held GPS unit is routinely able to estimate the ground truth within 6\u201310 meters. Specialized instruments can reduce GPS measurement error to under a centimeter."}, {"text": "The ground truth being estimated by those coordinates is the tip of George Washington's nose on Mount Rushmore. The accuracy of the estimate is the maximum distance between the location coordinates and the ground truth. We could say in this case that the estimate accuracy is 10 meters, meaning that the point on earth represented by the location coordinates is thought to be within 10 meters of George's nose\u2014the ground truth."}, {"text": "In remote sensing, \"ground truth\" refers to information collected on location. Ground truth allows image data to be related to real features and materials on the ground. 
The collection of ground truth data enables calibration of remote-sensing data, and aids in the interpretation and analysis of what is being sensed."}, {"text": "Ground truth also helps with atmospheric correction. Since images from satellites obviously have to pass through the atmosphere, they can get distorted because of absorption in the atmosphere. So ground truth can help fully identify objects in satellite photos."}]}, {"question": "What does the law of large numbers mean", "positive_ctxs": [{"text": "The law of large numbers, in probability and statistics, states that as a sample size grows, its mean gets closer to the average of the whole population. In the 16th century, mathematician Gerolama Cardano recognized the Law of Large Numbers but never proved it."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables."}, {"text": "Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables."}, {"text": "Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. 
This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables."}, {"text": "There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of independent and identically distributed (i.i.d.)"}, {"text": "The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below."}, {"text": "The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below."}, {"text": "The law of large numbers states that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below."}]}, {"question": "What is the difference between random and stochastic", "positive_ctxs": [{"text": "Stochastic vs. For example, a stochastic variable is a random variable. A stochastic process is a random process. Typically, random is used to refer to a lack of dependence between observations in a sequence. 
For example, the rolls of a fair die are random, so are the flips of a fair coin."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear.Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or"}, {"text": "Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear.Other authors consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or"}, {"text": "An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period."}, {"text": "An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period."}, {"text": "The term random function is also used to refer to a stochastic or random process, though sometimes it is only used when the stochastic process takes real values. 
This term is also used when the index sets are mathematical spaces other than the real line, while the terms stochastic process and random process are usually used when the index set is interpreted as time, and other terms are used such as random field when the index set is"}, {"text": "The term random function is also used to refer to a stochastic or random process, though sometimes it is only used when the stochastic process takes real values. This term is also used when the index sets are mathematical spaces other than the real line, while the terms stochastic process and random process are usually used when the index set is interpreted as time, and other terms are used such as random field when the index set is"}, {"text": "In statistics, econometrics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation which should not be confused with differential equation). 
Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive\u2013moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable."}]}, {"question": "What are descriptive analytics", "positive_ctxs": [{"text": "Descriptive analytics is a statistical method that is used to search and summarize historical data in order to identify patterns or meaning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data."}, {"text": "Much of the software that is currently used for learning analytics duplicates functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions. Some examples of learning analytics software tools include:"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": ").Chatti, Muslim and Schroeder note that the aim of open learning analytics (OLA) is to improve learning effectiveness in lifelong learning environments. The authors refer to OLA as an ongoing analytics process that encompasses diversity at all four dimensions of the learning analytics reference model."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What are some examples of artificial intelligence", "positive_ctxs": [{"text": "8 Examples of Artificial IntelligenceGoogle Maps and Ride-Hailing Applications. One doesn't have to put much thought into traveling to a new destination anymore. Face Detection and Recognition. Text Editors or Autocorrect. Search and Recommendation Algorithms. Chatbots. Digital Assistants. Social Media. E-Payments."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. 
Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "In 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents[2]. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into Multi-agent systems and distributed problem solving [1]."}, {"text": "There are some goals that almost any artificial intelligence might rationally pursue, like acquiring additional resources or self-preservation. This could prove problematic because it might put an artificial intelligence in direct competition with humans."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Kaplan and Haenlein structure artificial intelligence along three evolutionary stages: 1) artificial narrow intelligence \u2013 applying AI only to specific tasks; 2) artificial general intelligence \u2013 applying AI to several areas and able to autonomously solve problems they were never even designed for; and 3) artificial super intelligence \u2013 applying AI to any area capable of scientific creativity, social skills, and general wisdom.To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject matter expert Turing tests. Also, smaller problems provide more achievable goals and there are an ever-increasing number of positive results."}]}, {"question": "What is shift invariance in a convolutional neural network CNN", "positive_ctxs": [{"text": "Shift-invariance: this means that if we shift the input in time (or shift the entries in a vector) then the output is shifted by the same amount."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. 
They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. 
They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}, {"text": "In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. 
They have applications in image and video recognition, recommender systems, image classification, Image segmentation, medical image analysis, natural language processing, brain-computer interfaces, and financial time series.CNNs are regularized versions of multilayer perceptrons."}]}, {"question": "What do you use the TF Feature_column Bucketized_column function for", "positive_ctxs": [{"text": "bucketized_column. Represents discretized dense input bucketed by boundaries ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. 
What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Negotiators often use this tactic to calm tense situations: \"an apology can defuse emotions effectively, even when you do not acknowledge personal responsibility for the action or admit an intention to harm. An apology may be one of the least costly and most rewarding investments you can make.\""}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}]}, {"question": "When would you use a negative binomial distribution", "positive_ctxs": [{"text": "Negative binomial regression \u2013 Negative binomial regression can be used for over-dispersed count data, that is when the conditional variance exceeds the conditional mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. 
The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "We could similarly use the negative binomial distribution to model the number of days a certain machine works before it breaks down (r = 1)."}, {"text": "We could similarly use the negative binomial distribution to model the number of days a certain machine works before it breaks down (r = 1)."}, {"text": "The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(\u03bb) distribution, where \u03bb is itself a random variable, distributed as a gamma distribution with shape = r and scale \u03b8 = p/(1 \u2212 p) or correspondingly rate \u03b2 = (1 \u2212 p)/p."}, {"text": "The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(\u03bb) distribution, where \u03bb is itself a random variable, distributed as a gamma distribution with shape = r and scale \u03b8 = p/(1 \u2212 p) or correspondingly rate \u03b2 = (1 \u2212 p)/p."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}]}, {"question": "How do you explain covariance", "positive_ctxs": [{"text": "Covariance measures the directional relationship between the returns on two assets. 
A positive covariance means that asset returns move together while a negative covariance means they move inversely."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What is a sampling error in research", "positive_ctxs": [{"text": "A sampling error is a statistical error that occurs when an analyst does not select a sample that represents the entire population of data and the results found in the sample do not represent the results that would be obtained from the entire population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. 
What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "How much does Predictive Analytics cost", "positive_ctxs": [{"text": "The marketplace for predictive analytics software has ballooned: G2Crowd records 92 results in the category. Pricing varies substantially based on the number of users and, in some cases, amount of data, but generally starts around $1,000 per year, though it can easily scale into six figures."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "What is denying the antecedent in relation to a propositional fallacy", "positive_ctxs": [{"text": "Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from the original statement. It is committed by reasoning in the form: If P, then Q. Therefore, if not P, then not Q."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In propositional logic, modus tollens () (MT), also known as modus tollendo tollens (Latin for \"mode that by denying denies\") and denying the consequent, is a deductive argument form and a rule of inference. Modus tollens takes the form of \"If P, then Q. Therefore, not P.\" It is an application of the general truth that if a statement is true, then so is its contrapositive."}, {"text": "Stalnaker's account differs from Lewis's most notably in his acceptance of the limit and uniqueness assumptions. The uniqueness assumption is the thesis that, for any antecedent A, among the possible worlds where A is true, there is a single (unique) one that is closest to the actual world. The limit assumption is the thesis that, for a given antecedent A, if there is a chain of possible worlds where A is true, each closer to the actual world than its predecessor, then the chain has a limit: a possible world where A is true that is closer to the actual worlds than all worlds in the chain."}, {"text": "Denying the antecedent, sometimes also called inverse error or fallacy of the inverse, is a formal fallacy of inferring the inverse from the original statement. 
It is committed by reasoning in the form:"}, {"text": "Therefore, one is not looking for data in relation to another group but rather, one is seeking data in relation to the grand mean.Effects coding can either be weighted or unweighted. Weighted effects coding is simply calculating a weighted grand mean, thus taking into account the sample size in each variable. This is most appropriate in situations where the sample is representative of the population in question."}, {"text": "Therefore, one is not looking for data in relation to another group but rather, one is seeking data in relation to the grand mean.Effects coding can either be weighted or unweighted. Weighted effects coding is simply calculating a weighted grand mean, thus taking into account the sample size in each variable. This is most appropriate in situations where the sample is representative of the population in question."}, {"text": "Therefore, one is not looking for data in relation to another group but rather, one is seeking data in relation to the grand mean.Effects coding can either be weighted or unweighted. Weighted effects coding is simply calculating a weighted grand mean, thus taking into account the sample size in each variable. This is most appropriate in situations where the sample is representative of the population in question."}, {"text": "This is important because the frequency-current relation (f-I-curve) is often used by experimentalists to characterize a neuron. It is also the transfer function in"}]}, {"question": "What are the methods of estimation in statistics", "positive_ctxs": [{"text": "An estimate of a population parameter may be expressed in two ways: Point estimate. A point estimate of a population parameter is a single value of a statistic. 
For example, the sample mean x is a point estimate of the population mean \u03bc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Physics has for long employed a weighted averages method that is similar to meta-analysis.Estimation statistics in the modern era started with the development of the standardized effect size by Jacob Cohen in the 1960s. Research synthesis using estimation statistics was pioneered by Gene V. Glass with the development of the method of meta-analysis in the 1970s. Estimation methods have been refined since by Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, Geoff Cumming and others."}, {"text": "Applied statistics comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments."}, {"text": "Applied statistics comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments."}, {"text": "Applied statistics comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. 
Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments."}, {"text": "Applied statistics comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments."}, {"text": "Applied statistics comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? 
What are the extended dimensions of the pressure of the two parts?"}]}, {"question": "How do you find the marginal density function", "positive_ctxs": [{"text": "4:306:35Suggested clip \u00b7 77 secondsMarginal PDF from Joint PDF - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "For i = 1, 2, ...,n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n \u2212 1 variables:"}, {"text": "For i = 1, 2, ...,n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n \u2212 1 variables:"}, {"text": "For i = 1, 2, ...,n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n \u2212 1 variables:"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables."}]}, {"question": "How long do neural networks take to train", "positive_ctxs": [{"text": "It might take about 2-4 hours of coding and 1-2 hours of training if done in Python and Numpy (assuming sensible parameter initialization and a good set of hyperparameters). No GPU required, your old but gold CPU on a laptop will do the job. Longer training time is expected if the net is deeper than 2 hidden layers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language. Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds\u2014that the brain is a neural state machine\u2014is open to doubt."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. 
A common technique is to train the network on a larger data set from a related domain."}]}, {"question": "What is the purpose of factor analysis", "positive_ctxs": [{"text": "The purpose of factor analysis is to reduce many individual items into a fewer number of dimensions. Factor analysis can be used to simplify data, such as reducing the number of variables in regression models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data."}, {"text": "Higher-order factor analysis is a statistical method consisting of repeating steps factor analysis \u2013 oblique rotation \u2013 factor analysis of rotated factors. Its merit is to enable the researcher to see the hierarchical structure of studied phenomena. To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983) and perhaps applying a Varimax rotation to the result (Thompson, 1990) or by using a Schmid-Leiman solution (SLS, Schmid & Leiman, 1957, also known as Schmid-Leiman transformation) which attributes the variation from the primary factors to the second-order factors."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. 
A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Interpreting factor analysis is based on using a \"heuristic\", which is a solution that is \"convenient even if not absolutely true\". More than one interpretation can be made of the same data factored the same way, and factor analysis cannot identify causality."}, {"text": "Factor analysis is similar to principal component analysis, in that factor analysis also involves linear combinations of variables. Different from PCA, factor analysis is a correlation-focused approach seeking to reproduce the inter-correlations among variables, in which the factors \"represent the common variance of variables, excluding unique variance\". In terms of the correlation matrix, this corresponds with focusing on explaining the off-diagonal terms (that is, shared co-variance), while PCA focuses on explaining the terms that sit on the diagonal."}]}, {"question": "What is l2 regularization in neural networks", "positive_ctxs": [{"text": "Neural network regularization is a technique used to reduce the likelihood of model overfitting. There are several forms of regularization. The most common form is called L2 regularization. L2 regularization tries to reduce the possibility of overfitting by keeping the values of the weights and biases small."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Ans and Rousset (1997) also proposed a two-network artificial neural architecture with memory self-refreshing that overcomes catastrophic interference when sequential learning tasks are carried out in distributed networks trained by backpropagation. The principle is to interleave, at the time when new external patterns are learned, those to-be-learned new external patterns with internally generated pseudopatterns, or 'pseudo-memories', that reflect the previously learned information. 
What mainly distinguishes this model from those that use classical pseudorehearsal in feedforward multilayer networks is a reverberating process that is used for generating pseudopatterns."}, {"text": "Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning where a neural network is used to represent policies or value functions. As in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single layered neural network, it is sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. 
Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}]}, {"question": "What is the use of iterator", "positive_ctxs": [{"text": "An Iterator is an object that can be used to loop through collections, like ArrayList and HashSet. It is called an \"iterator\" because \"iterating\" is the technical term for looping. To use an Iterator, you must import it from the java."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "Consider the ordered list {1,2,3,4} which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?"}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. 
A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?"}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "How do you calculate bag words", "positive_ctxs": [{"text": "Some additional simple scoring methods include:Counts. Count the number of times each word appears in a document.Frequencies. Calculate the frequency that each word appears in a document out of all the words in the document."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Imagine there are two literal bags full of words. One bag is filled with words found in spam messages, and the other with words found in legitimate e-mail. While any given word is likely to be somewhere in both bags, the \"spam\" bag will contain spam-related words such as \"stock\", \"Viagra\", and \"buy\" significantly more frequently, while the \"ham\" bag will contain more words related to the user's friends or workplace."}, {"text": "Imagine there are two literal bags full of words. One bag is filled with words found in spam messages, and the other with words found in legitimate e-mail. 
While any given word is likely to be somewhere in both bags, the \"spam\" bag will contain spam-related words such as \"stock\", \"Viagra\", and \"buy\" significantly more frequently, while the \"ham\" bag will contain more words related to the user's friends or workplace."}, {"text": "Imagine there are two literal bags full of words. One bag is filled with words found in spam messages, and the other with words found in legitimate e-mail. While any given word is likely to be somewhere in both bags, the \"spam\" bag will contain spam-related words such as \"stock\", \"Viagra\", and \"buy\" significantly more frequently, while the \"ham\" bag will contain more words related to the user's friends or workplace."}, {"text": "Imagine there are two literal bags full of words. One bag is filled with words found in spam messages, and the other with words found in legitimate e-mail. While any given word is likely to be somewhere in both bags, the \"spam\" bag will contain spam-related words such as \"stock\", \"Viagra\", and \"buy\" significantly more frequently, while the \"ham\" bag will contain more words related to the user's friends or workplace."}, {"text": "Limitations of bag of words model (BOW), where a text is represented as an unordered collection of words. To address some of the limitation of bag of words model (BOW), multi-gram dictionary can be used to find direct and indirect association as well as higher-order co-occurrences among terms."}, {"text": "Limitations of bag of words model (BOW), where a text is represented as an unordered collection of words. 
To address some of the limitation of bag of words model (BOW), multi-gram dictionary can be used to find direct and indirect association as well as higher-order co-occurrences among terms."}]}, {"question": "How does Lstm overcomes vanishing gradient problem", "positive_ctxs": [{"text": "LSTMs solve the problem using a unique additive gradient structure that includes direct access to the forget gate's activations, enabling the network to encourage desired behaviour from the error gradient using frequent gates update on every time step of the learning process."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. 
LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}]}, {"question": "What does a sigmoid curve mean", "positive_ctxs": [{"text": "In its simplest form, the sigmoid is a representation of time (on the horizontal axis) and activity (on the vertical axis). The wonder of this curve is that it really describes most phenomena, regardless of type. The phenomenon experiences sharp growth. It hits a maturity phase where growth slows, and then stops."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. 
A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "Why would you use a logarithmic scale", "positive_ctxs": [{"text": "There are two main reasons to use logarithmic scales in charts and graphs. The first is to respond to skewness towards large values; i.e., cases in which one or a few points are much larger than the bulk of the data. The second is to show percent change or multiplicative factors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is an anomaly for a small city to field such a good team. 
the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern\u2014that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere?"}, {"text": "This is used in the moment magnitude scale or the Richter magnitude scale. For example, a 5.0 earthquake releases 32 times (101.5) and a 6.0 releases 1000 times (103) the energy of a 4.0. Another logarithmic scale is apparent magnitude."}, {"text": "This gives rise to a logarithmic spiral. Benford's law on the distribution of leading digits can also be explained by scale invariance. Logarithms are also linked to self-similarity."}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}, {"text": "are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on which scale you used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is symmetric data distribution", "positive_ctxs": [{"text": "Symmetrical distribution occurs when the values of variables occur at regular frequencies and the mean, median and mode occur at the same point. In graph form, symmetrical distribution often appears as a bell curve. If a line were drawn dissecting the middle of the graph, it would show two sides that mirror each other."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4,..."}, {"text": "If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. 
This is the case of a coin toss or the series 1,2,3,4,..."}, {"text": "The third central moment is the measure of the lopsidedness of the distribution; any symmetric distribution will have a third central moment, if defined, of zero. The normalised third central moment is called the skewness, often \u03b3. A distribution that is skewed to the left (the tail of the distribution is longer on the left) will have a negative skewness."}, {"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}]}, {"question": "What are the advantages and disadvantages of Dimensional Analysis", "positive_ctxs": [{"text": "(i) The value of dimensionless constants cannot be determined by this method. (ii) This method cannot be applied to equations involving exponential and trigonometric functions. (iii) It cannot be applied to an equation involving more than three physical quantities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Since both advantages and disadvantages present on the two way of modeling, combining both approaches will be a good modeling in practice. For example, in Marras' article A Joint Discriminative Generative Model for Deformable Model Construction and Classification, he and his coauthors apply the combination of two modelings on face classification of the models, and receive a higher accuracy than the traditional approach."}, {"text": "The choice of numerator layout in the introductory sections below does not imply that this is the \"correct\" or \"superior\" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors."}, {"text": "Ronald J. Brachman; What IS-A is and isn't. 
An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}, {"text": "Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm. Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points."}, {"text": "Soft robots may also be used for the creation of flexible exosuits, for rehabilitation of patients, assisting the elderly, or simply enhancing the user's strength. A team from Harvard created an exosuit using these materials in order to give the advantages of the additional strength provided by an exosuit, without the disadvantages that come with how rigid materials restrict a person's natural movement. The exosuits are metal frameworks fitted with motorized muscles to multiply the wearer\u2019s strength."}, {"text": "Connectionism is an approach in the fields of cognitive science that hopes to explain mental phenomena using artificial neural networks (ANN). Connectionism presents a cognitive theory based on simultaneously occurring, distributed signal activity via connections that can be represented numerically, where learning occurs by modifying connection strengths based on experience.Some advantages of the connectionist approach include its applicability to a broad array of functions, structural approximation to biological neurons, low requirements for innate structure, and capacity for graceful degradation. 
Some disadvantages include the difficulty in deciphering how ANNs process information, or account for the compositionality of mental representations, and a resultant difficulty explaining phenomena at a higher level.The success of deep learning networks in the past decade has greatly increased the popularity of this approach, but the complexity and scale of such networks has brought with them increased interpretability problems."}, {"text": "Hydrostatic CVTs use a variable displacement pump and a hydraulic motor, therefore the transmission converts hydraulic pressure to the rotation of the output shaft. The advantages of hydrostatic CVTs are:"}]}, {"question": "What is the difference between intelligence and artificial intelligence", "positive_ctxs": [{"text": "Conclusion. Human intelligence revolves around adapting to the environment using a combination of several cognitive processes. The field of Artificial intelligence focuses on designing machines that can mimic human behavior. However, AI researchers are able to go as far as implementing Weak AI, but not the Strong AI."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "\u2014is measured, and simultaneously halving the factor loadings for verbal intelligence makes no difference to the model. Thus, no generality is lost by assuming that the standard deviation of the factors for verbal intelligence is"}, {"text": "Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. 'Strong' AI is usually labelled as AGI (Artificial General Intelligence) while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence)."}, {"text": "Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. 'Strong' AI is usually labelled as AGI (Artificial General Intelligence) while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence)."}, {"text": "Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen. 
'Strong' AI is usually labelled as AGI (Artificial General Intelligence) while attempts to emulate 'natural' intelligence have been called ABI (Artificial Biological Intelligence)."}, {"text": "Artificial intelligence is employed in online dispute resolution platforms that use optimization algorithms and blind-bidding. Artificial intelligence is also frequently employed in modeling the legal ontology, \"an explicit, formal, and general specification of a conceptualization of properties of and relations between objects in a given domain\".Artificial intelligence and law (AI and law) is a subfield of artificial intelligence (AI) mainly concerned with applications of AI to legal informatics problems and original research on those problems. It is also concerned to contribute in the other direction: to export tools and techniques developed in the context of legal problems to AI in general."}]}, {"question": "How do you write a hypothesis and null hypothesis", "positive_ctxs": [{"text": "To write a null hypothesis, first start by asking a question. Rephrase that question in a form that assumes no relationship between the variables. In other words, assume a treatment has no effect. Write your hypothesis in a way that reflects this."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. 
We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}, {"text": "Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant."}]}, {"question": "What is the difference between stemming and Lemmatization", "positive_ctxs": [{"text": "In simple words, stemming technique only looks at the form of the word whereas lemmatization technique looks at the meaning of the word. It means after applying lemmatization, we will always get a valid word."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? 
It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}]}, {"question": "What is coefficients of linear Discriminants", "positive_ctxs": [{"text": "Coefficients of linear discriminants: Shows the linear combination of predictor variables that are used to form the LDA decision rule. for example, LD1 = 0.91*Sepal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "\u03b7 is expressed as linear combinations (thus, \"linear\") of unknown parameters \u03b2. The coefficients of the linear combination are represented as the matrix of independent variables X. 
\u03b7 can thus be expressed as"}, {"text": "\u03b7 is expressed as linear combinations (thus, \"linear\") of unknown parameters \u03b2. The coefficients of the linear combination are represented as the matrix of independent variables X. \u03b7 can thus be expressed as"}, {"text": "For linear models, the indirect effect can be computed by taking the product of all the path coefficients along a mediated pathway. The total indirect effect is computed by the sum of the individual indirect effects. For linear models mediation is indicated when the coefficients of an equation fitted without including the mediator vary significantly from an equation that includes it."}, {"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}, {"text": "is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained."}, {"text": "is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained."}]}, {"question": "How does dimensionality reduction work", "positive_ctxs": [{"text": "The higher the number of features, the harder it gets to visualize the training set and then work on it. Dimensionality reduction is the process of reducing the number of random variables under consideration, by obtaining a set of principal variables. It can be divided into feature selection and feature extraction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Dimensionality reduction loses information, in general. 
PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models."}, {"text": "Dimensionality reduction loses information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models."}, {"text": "Dimensionality reduction loses information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models."}, {"text": "Dimensionality reduction loses information, in general. PCA-based dimensionality reduction tends to minimize that information loss, under certain signal and noise models."}, {"text": "T-distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear dimensionality reduction technique useful for visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well."}, {"text": "T-distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear dimensionality reduction technique useful for visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well."}, {"text": "The vector space associated with these vectors is often called the feature space. In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed."}]}, {"question": "How do you interpret Sorensen index of similarity", "positive_ctxs": [{"text": "Both indices take values from zero to one. In a similarity index, a value of 1 means that the two communities you are comparing share all their species, while a value of 0 means they share none. 
In a dissimilarity index the interpretation is the opposite: 1 means that the communities are totally different."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "is not taken into account and can vary from 0 upward without bound.Jaccard indexThe Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two dataset are identical, and an index of 0 indicates that the datasets have no common elements."}, {"text": "is not taken into account and can vary from 0 upward without bound.Jaccard indexThe Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two dataset are identical, and an index of 0 indicates that the datasets have no common elements."}, {"text": "Fowlkes\u2013Mallows indexThe Fowlkes\u2013Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes\u2013Mallows index the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:"}, {"text": "Fowlkes\u2013Mallows indexThe Fowlkes\u2013Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes\u2013Mallows index the more similar the clusters and the benchmark classifications are. 
It can be computed using the following formula:"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "Can we incorporate genetic algorithm concept to artificial neural network", "positive_ctxs": [{"text": "In short, the problem with neural networks is that a number of parameter have to be set before any training can begin. However, there are no clear rules how to set these parameters. By combining genetic algorithms with neural networks (GANN), the genetic algorithm is used to find these parameters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Genetic memory uses genetic algorithm and sparse distributed memory as a pseudo artificial neural network. It has been considered for use in creating artificial life."}, {"text": "One solution is to use an (adapted) artificial neural network as a function approximator. Function approximation may speed up learning in finite problems, due to the fact that the algorithm can generalize earlier experiences to previously unseen states."}, {"text": "The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers are giving directions to the artificial pilots and the AOD wants to the pilots to respond to the ATC's with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks."}, {"text": "The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers are giving directions to the artificial pilots and the AOD wants to the pilots to respond to the ATC's with simple responses. 
The programs that incorporate the speech software must be trained, which means they use neural networks."}, {"text": "AlphaGo and its successors use a Monte Carlo tree search algorithm to find its moves based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) by extensive training, both from human and computer play. A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration."}, {"text": "With respect to other advanced machine learning approaches, such as artificial neural networks, random forests, or genetic programming, learning classifier systems are particularly well suited to problems that require interpretable solutions."}, {"text": "A neural network (NN), in the case of artificial neurons called artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionistic approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network."}]}, {"question": "What is the use of probability distribution", "positive_ctxs": [{"text": "Probability distributions are a fundamental concept in statistics. They are used both on a theoretical level and a practical level. Some practical uses of probability distributions are: To calculate confidence intervals for parameters and to calculate critical regions for hypothesis tests."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probability distribution can be viewed as a partition of a set. One may then ask: if a set were partitioned randomly, what would the distribution of probabilities be? 
What would the expectation value of the mutual information be?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The use of Beta distributions in Bayesian inference is due to the fact that they provide a family of conjugate prior probability distributions for binomial (including Bernoulli) and geometric distributions. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value p:"}, {"text": "If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann\u2013Stieltjes integral"}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}, {"text": "where the true error variance \u03c32 is replaced by an estimate based on the minimized value of the sum of squares objective function S. The denominator, n \u2212 m, is the statistical degrees of freedom; see effective degrees of freedom for generalizations.If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. 
Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed."}, {"text": "where the true error variance \u03c32 is replaced by an estimate based on the minimized value of the sum of squares objective function S. The denominator, n \u2212 m, is the statistical degrees of freedom; see effective degrees of freedom for generalizations.If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed."}]}, {"question": "Is logistic regression a generalized linear model", "positive_ctxs": [{"text": "The short answer is: Logistic regression is considered a generalized linear model because the outcome always depends on the sum of the inputs and parameters. Or in other words, the output cannot depend on the product (or quotient, etc.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. 
When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. 
Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}, {"text": "Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression."}]}, {"question": "What is statistical power in research", "positive_ctxs": [{"text": "Statistical power, or the power of a hypothesis test is the probability that the test correctly rejects the null hypothesis. That is, the probability of a true positive result. statistical power is the probability that a test will correctly reject a false null hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". 
This can be represented in pseudocode as:"}, {"text": "In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors."}, {"text": "In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors."}, {"text": "In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors."}, {"text": "In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors."}, {"text": "In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word power in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer Type II errors."}, {"text": "Power analysis can either be done before (a priori or prospective power analysis) or after (post hoc or retrospective power analysis) data are collected. A priori power analysis is conducted prior to the research study, and is typically used in estimating sufficient sample sizes to achieve adequate power. 
Post-hoc analysis of \"observed power\" is conducted after a study has been completed, and uses the obtained sample size and effect size to determine what the power was in the study, assuming the effect size in the sample is equal to the effect size in the population."}]}, {"question": "What does statistical inference take into account", "positive_ctxs": [{"text": "Statistical inference involves hypothesis testing (evaluating some idea about a population using a sample) and estimation (estimating the value or potential range of values of some characteristic of the population based on that of a sample)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create 4 treatments by taking into account whether or not Mary receives treatment."}, {"text": "Very often, in questionnaires, the questions are structured in several issues. In the statistical analysis it is necessary to take into account this structure. This is the aim of multiple factor analysis which balances the different issues (i.e."}, {"text": "The joint-probability of agreement is the simplest and the least robust measure. It is estimated as the percentage of the time the raters agree in a nominal or categorical rating system. It does not take into account the fact that agreement may happen solely based on chance."}, {"text": "if the average therapy-induced death rate is greater than the baseline proliferation rate, then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy (e.g. it does not take into account the phenomenon of clonal resistance)."}, {"text": "if the average therapy-induced death rate is greater than the baseline proliferation rate, then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy (e.g. 
it does not take into account the phenomenon of clonal resistance)."}, {"text": "When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or \"ruling in\" the disease."}, {"text": "When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives. The bogus test also returns positive on all healthy patients, giving it a false positive rate of 100%, rendering it useless for detecting or \"ruling in\" the disease."}]}, {"question": "What is meant by learning rate", "positive_ctxs": [{"text": "In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. In the adaptive control literature, the learning rate is commonly referred to as gain."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "If so much research had already been done on learning from films, what exactly did programmed learning add? The short answer is \"stimulus control\", by which is broadly meant the teaching material itself. 
Also, in programmed learning, a complete system was proposed which included these stages:"}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum. In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}]}, {"question": "How can we avoid misleading statistics", "positive_ctxs": [{"text": "5 Ways to Avoid Being Fooled By Statistics. Do A Little Bit of Math and apply Common Sense. Always Look for the Source and check the authority of the source. Question if the statistics are biased or statistically insignificant. Question if the statistics are skewed purposely or Misinterpreted."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}]}, {"question": "When would you use multinomial regression", "positive_ctxs": [{"text": "Multinomial logistic regression (often just called 'multinomial regression') is used to predict a nominal dependent variable given one or more independent variables. It is sometimes considered an extension of binomial logistic regression to allow for a dependent variable with more than two categories."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "In such a situation, ordinary least squares (the basic regression technique) is widely seen as inadequate; instead probit regression or logistic regression is used. Further, sometimes there are three or more categories for the dependent variable \u2014 for example, no charges, charges, and death sentences. In this case, the multinomial probit or multinomial logit technique is used."}, {"text": "In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). 
By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding."}, {"text": "To calculate decimal odds, you can use the equation Return = Initial Wager x Decimal Value. For example, if you bet \u20ac100 on Liverpool to beat Manchester City at 2.00 odds you would win \u20ac200 (\u20ac100 x 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the inverse of the probability of an outcome."}]}, {"question": "What do you think is important in a machine learning Pipeline", "positive_ctxs": [{"text": "Getting Familiar with ML Pipelines: A machine learning pipeline is used to help automate machine learning workflows. They operate by enabling a sequence of data to be transformed and correlated together in a model that can be tested and evaluated to achieve an outcome, whether positive or negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. By reading these over-optimistic scores, you will be very happy and will think that your machine learning algorithm is doing an excellent job. 
Obviously, you would be on the wrong track."}, {"text": "The comments should encourage the student to think about the effects of his or her actions on others\u2014a strategy that in effect encourages the student to consider the ethical implications of the actions (Gibbs, 2003). Instead of simply saying, \"When you cut in line ahead of the other kids, that was not fair to them\", the teacher can try asking, \"How do you think the other kids feel when you cut in line ahead of them?\""}, {"text": "During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated: There are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen."}, {"text": "What is more, there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge. A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "Some examples of successful entrepreneurs that have used bootstrapping in order to finance their businesses include serial entrepreneur Mark Cuban. He has publicly endorsed bootstrapping claiming that \u201cIf you can start on your own \u2026 do it by [yourself] without having to go out and raise money.\u201d When asked why he believed this approach was most necessary, he replied, \u201cI think the biggest mistake people make is once they have an idea and the goal of starting a business, they think they have to raise money. 
And once you raise money, that\u2019s not an accomplishment, that\u2019s an obligation\u201d because \u201cnow, you\u2019re reporting to whoever you raised money from.\u201d"}]}, {"question": "Is RMSE and standard error same", "positive_ctxs": [{"text": "In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error."}, {"text": "MAE is not identical to RMSE (root-mean square error), but some researchers report and interpret RMSE as if RMSE reflects the measurement that MAE gives. MAE is conceptually simpler and more interpretable than RMSE. MAE does not require the use of squares or square roots."}, {"text": "Furthermore, each error contributes to MAE in proportion to the absolute value of the error, which is not true for RMSE; because RMSE involves squaring the difference between the X and Y, a few large differences will increase the RMSE to a greater degree than the MAE. See the example above for an illustration of these differences."}, {"text": "standard error of that quantity. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:"}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. 
The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}]}, {"question": "What is preference Learning And how is it different from machine learning", "positive_ctxs": [{"text": "Preference learning is a subfield in machine learning, which is a classification method based on observed preference information. 
In the view of supervised learning, preference learning trains on a set of items which have preferences toward labels or other items and predicts the preferences for all items."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Educationally, citation or link analysis is important for mapping knowledge domains. The essential idea behind these attempts is the realization that, as data increases, individuals, researchers or business analysts need to understand how to track the underlying patterns behind the data and how to gain insight from them. And this is also a core idea in Learning Analytics. Digitalization of Social network analysis"}, {"text": "Learning from demonstration is often explained from a perspective that the working Robot-control-system is available and the human-demonstrator is using it. And indeed, if the software works, the Human operator takes the robot-arm, makes a move with it, and the robot will reproduce the action later. For example, he teaches the robot-arm how to put a cup under a coffeemaker and press the start-button."}, {"text": "Inferences are split into multiple categories including conclusive, deduction, and induction. In order for an inference to be considered complete it was required that all categories must be taken into account. This is how the ITL varies from other machine learning theories like Computational Learning Theory and Statistical Learning Theory; which both use singular forms of inference."}, {"text": "In the replay phase, the robot is imitating this behavior 1:1. But that is not how the system works internally; it is only what the audience can observe. In reality, Learning from demonstration is much more complex."}, {"text": "Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, the machine is forced to build a compact internal representation of its world. 
In contrast to Supervised Learning (SL) where data is tagged by a human, e.g."}]}, {"question": "What is normalized percentage", "positive_ctxs": [{"text": "Normalization basically means bringing all the values to one scale and there is nothing wrong using percentage but there must be a base value for normalizing the data and if you are asking about 100 as a base value and then converting everything as % it will not be equal to normalization as in normalization the base"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Asymmetric lambda measures the percentage improvement in predicting the dependent variable. Symmetric lambda measures the percentage improvement when prediction is done in both directions."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "When do I approximate Binomial Distribution with Normal vs Poisson", "positive_ctxs": [{"text": "Consider a binomial distribution with parameters (n, p). When n is large and p is small, approximate the probability using Poisson distribution. When n is large and p is close to 0.5, use normal approximation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task.\" IDA has been extended to LIDA (Learning Intelligent Distribution Agent)."}, {"text": "The term contingency table was first used by Karl Pearson in \"On the Theory of Contingency and Its Relation to Association and Normal Correlation\", part of the Drapers' Company Research Memoirs Biometric Series I published in 1904."}, {"text": "The rate of an event is related to the probability of an event occurring in some small subinterval (of time, space or otherwise). In the case of the Poisson distribution, one assumes that there exists a small enough subinterval for which the probability of an event occurring twice is \"negligible\". With this assumption one can derive the Poisson distribution from the Binomial one, given only the information of expected number of total events in the whole interval."}, {"text": "And the reason why ReiserFS is the first journaling filesystem that was integrated in the standard kernel was not because I love Hans Reiser. It was because SUSE actually started shipping with ReiserFS as their standard kernel, which told me \"ok.\" This is actually in production use. Normal People are doing this."}, {"text": "When constructing continuous-time stochastic processes certain mathematical difficulties arise, due to the uncountable index sets, which do not occur with discrete-time processes. One problem is that it is possible to have more than one stochastic process with the same finite-dimensional distributions. For example, both the left-continuous modification and the right-continuous modification of a Poisson process have the same finite-dimensional distributions."}]}, {"question": "What is an intuitive explanation of Markovs inequality", "positive_ctxs": [{"text": "Which intuitively says that the probability of has to be \u201creally high\u201d. 
In other words, if your value is smaller than E[X], then the upper bound of it taking that value is 1 (basically sort of an uninteresting statement, since you already knew the upper bound was 1 or greater)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem. In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observations per cluster is fixed at n. Below,"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Abductive validation is the process of validating a given hypothesis through abductive reasoning. This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data."}, {"text": "In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of 1 + x. It is often employed in real analysis."}, {"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is a sigmoid function, which takes any real input"}]}, {"question": "Which of the following are the disadvantages of using Knn", "positive_ctxs": [{"text": "Some Disadvantages of KNN: Accuracy depends on the quality of the data. With large data, the prediction stage might be slow. Sensitive to the scale of the data and irrelevant features. Require high memory \u2013 need to store all of the training data. Given that it stores all of the training, it can be computationally expensive."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared-speed. The RMS speed of an ideal gas is calculated using the following equation:"}, {"text": "Every instance of the real-world situation (or run of the experiment) must produce exactly one outcome. If outcomes of different runs of an experiment differ in any way that matters, they are distinct outcomes. Which differences matter depends on the kind of analysis we want to do."}, {"text": "More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to commonsense, i.e. to what is known about the experimental system which has generated the data points. 
These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials."}, {"text": "One of the notorious disadvantages of BoW is that it ignores the spatial relationships among the patches, which are very important in image representation. Researchers have proposed several methods to incorporate the spatial information. For feature level improvements, correlogram features can capture spatial co-occurrences of features."}, {"text": "Another model that was developed to offset the disadvantages of the LPM is the probit model. The probit model uses the same approach to non-linearity as does the logit model; however, it uses the normal CDF instead of the logistic CDF."}]}, {"question": "How do you implement deep learning", "positive_ctxs": [{"text": "Let's GO! Step 0: Pre-requisites. It is recommended that before jumping on to Deep Learning, you should know the basics of Machine Learning. Step 1: Setup your Machine. Step 2: A Shallow Dive. Step 3: Choose your own Adventure! Step 4: Deep Dive into Deep Learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What is the problem with convenience sampling", "positive_ctxs": [{"text": "The disadvantages: Convenience samples do not produce representative results. If you need to extrapolate to the target population, convenience samples aren't going to get you there."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Convenience sampling is not often recommended for research due to the possibility of sampling error and lack of representation of population. But it can be handy depending on the situation. In some situations, convenience sampling is the only possible option."}, {"text": "A convenience sample is a type of non-probability sampling method where the sample is taken from a group of people easy to contact or to reach. For example, standing at a mall or a grocery store and asking people to answer questions would be an example of a convenience sample. 
This type of sampling is also known as grab sampling or availability sampling."}, {"text": "The results of the convenience sampling cannot be generalized to the target population because of the potential bias of the sampling technique due to under-representation of subgroups in the sample in comparison to the population of interest. The bias of the sample cannot be measured. Therefore, inferences based on the convenience sampling should be made only about the sample itself."}, {"text": "One of the most important aspects of convenience sampling is its cost effectiveness. This method allows for funds to be distributed to other aspects of the project. Oftentimes this method of sampling is used to gain funding for a larger, more thorough research project."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Quota sampling is useful when time is limited, a sampling frame is not available, the research budget is very tight or detailed accuracy is not important. Subsets are chosen and then either convenience or judgment sampling is used to choose people from each subset. The researcher decides how many of each category are selected."}, {"text": "When time is of the essence, many researchers turn to convenience sampling for data collection, as they can swiftly gather data and begin their calculations. It is useful in time sensitive research because very little preparation is needed to use convenience sampling for data collection. 
It is also useful when researchers need to conduct pilot data collection in order to gain a quick understanding of certain trends or to develop hypotheses for future research."}]}, {"question": "How do you prevent Underfitting in machine learning", "positive_ctxs": [{"text": "Techniques to reduce underfitting: Increase model complexity. Increase number of features, performing feature engineering. Remove noise from the data. Increase the number of epochs or increase the duration of training to get better results."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Underfitting occurs when a statistical model or machine learning algorithm cannot adequately capture the underlying structure of the data. It occurs when the model or algorithm does not fit the data enough. Underfitting occurs if the model or algorithm shows low variance but high bias (to contrast the opposite, overfitting from high variance and low bias)."}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). 
And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "What is discrete and continuous distribution", "positive_ctxs": [{"text": "Control Charts: A discrete distribution is one in which the data can only take on certain values, for example integers. A continuous distribution is one in which data can take on any value within a specified range (which may be infinite)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. 
This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback\u2013Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e."}, {"text": "The raison d'\u00eatre of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two."}, {"text": "Most generally, every probability distribution on the real line is a mixture of discrete part, singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem \u00a7 Refinement. 
The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers)."}]}, {"question": "How do you find x and y variables in regression", "positive_ctxs": [{"text": "In regression analysis, the dependent variable is denoted Y and the independent variable is denoted X. So, in this case, Y=total cholesterol and X=BMI. When there is a single continuous dependent variable and a single independent variable, the analysis is called a simple linear regression analysis ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "x occurs free in \u2200y \u03c6, if and only if x occurs free in \u03c6 and x is a different symbol from y. Also, x occurs bound in \u2200y \u03c6, if and only if x is y or x occurs bound in \u03c6. The same rule holds with \u2203 in place of \u2200.For example, in \u2200x \u2200y (P(x) \u2192 Q(x,f(x),z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula."}, {"text": "Convergence questions are treated by considering vector spaces V carrying a compatible topology, a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if x and y in V, and a in F vary by a bounded amount, then so do x + y and ax."}, {"text": "x \u2264 y implies f(x) \u2264 f(y),for all x and y in its domain. The composite of two monotone mappings is also monotone."}, {"text": "Difference, x \u2212 y: The difference of two points x and y is the n-tuple that has ones where x and y differ and zeros elsewhere. It is the bitwise 'exclusive or': x \u2212 y = x \u2295 y. 
The difference commutes: x \u2212 y = y \u2212 x."}, {"text": "Let S be a vector space or an affine space over the real numbers, or, more generally, over some ordered field. This includes Euclidean spaces, which are affine spaces. A subset C of S is convex if, for all x and y in C, the line segment connecting x and y is included in C. This means that the affine combination (1 \u2212 t)x + ty belongs to C, for all x and y in C, and t in the interval [0, 1]."}, {"text": "Betweenness, x:y:z: Point y is between points x and z if and only if the distance from x to z is the sum of the distances from x to y and from y to z; that is, x:y:z \u21d4 d(x, z) = d(x, y) + d(y, z). It is easily seen that every bit of a point in between is a copy of the corresponding bit of an endpoint."}]}, {"question": "Which is larger average deviation or standard deviation", "positive_ctxs": [{"text": "So standard deviation gives you more deviation than mean deviation whem there are certain data points that are too far from its mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that unlike the variance, it is expressed in the same unit as the data."}, {"text": "The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. 
A useful property of the standard deviation is that unlike the variance, it is expressed in the same unit as the data."}, {"text": "Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than deviation of each normal distribution."}, {"text": "A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and average is 100, giving the coefficient of variation as"}, {"text": "The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0."}, {"text": "The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0."}, {"text": "The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0."}]}, {"question": "What is a gradient machine learning", "positive_ctxs": [{"text": "Gradient descent is an optimization algorithm that's used when training a machine learning model. 
It's based on a convex function and tweaks its parameters iteratively to minimize a given function to its local minimum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network. It is a special case of the more general backpropagation algorithm."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Does the order of training examples within a minibatch matter when training a neural network", "positive_ctxs": [{"text": "Order of training data during training a neural network matters a great deal. If you are training with a mini batch you may see large fluctuations in accuracy (and cost function) and may end up over fitting correlated portions of your mini batch."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational learning theory is concerned with training classifiers on a limited amount of data. In the context of neural networks a simple heuristic, called early stopping, often ensures that the network will generalize well to examples not in the training set."}, {"text": "Computational learning theory is concerned with training classifiers on a limited amount of data. In the context of neural networks a simple heuristic, called early stopping, often ensures that the network will generalize well to examples not in the training set."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. 
A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}]}, {"question": "How do you calculate classification accuracy", "positive_ctxs": [{"text": "Classification accuracy is our starting point. It is the number of correct predictions made divided by the total number of predictions made, multiplied by 100 to turn it into a percentage."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. 
How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "To calculate decimal odds, you can use the equation Return = Initial Wager x Decimal Value. For example, if you bet \u20ac100 on Liverpool to beat Manchester City at 2.00 odds you would win \u20ac200 (\u20ac100 x 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the inverse of the probability of an outcome."}, {"text": "It may be the case that the accuracy of the assignment on the test set deteriorates, but the accuracy of the classification properties of the tree increases overall."}, {"text": "These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. By reading these over-optimistic scores, then you will be very happy and will think that your machine learning algorithm is doing an excellent job. Obviously, you would be on the wrong track."}]}, {"question": "Is SVM better than logistic regression", "positive_ctxs": [{"text": "SVM tries to finds the \u201cbest\u201d margin (distance between the line and the support vectors) that separates the classes and this reduces the risk of error on the data, while logistic regression does not, instead it can have different decision boundaries with different weights that are near the optimal point."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). 
The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). 
The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variable and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variable and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}]}, {"question": "What is Arima model used for", "positive_ctxs": [{"text": "An autoregressive integrated moving average, or ARIMA, is a statistical analysis model that uses time series data to either better understand the data set or to predict future trends."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. 
What is special about financial derivatives?"}]}, {"question": "What is KTH in statistics", "positive_ctxs": [{"text": "In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "How does a deep neural network learn", "positive_ctxs": [{"text": "In simple terms, deep learning is when ANNs learn from large amounts of data. Similar to how humans learn from experience, a deep learning algorithm performs a task repeatedly, each time tweaking it slightly to improve the outcome."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another class of model-free deep reinforcement learning algorithms rely on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function"}, {"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. 
The deep belief network model by Hinton et al."}, {"text": "Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}, {"text": "A deep Q-network (DQN) is a type of deep learning model that combines a deep neural network with Q-learning, a form of reinforcement learning. 
Unlike earlier reinforcement learning agents, DQNs that utilize CNNs can learn directly from high-dimensional sensory inputs via reinforcement learning.Preliminary results were presented in 2014, with an accompanying paper in February 2015. The research described an application to Atari 2600 gaming."}]}, {"question": "What is sampling error and how can it be reduced", "positive_ctxs": [{"text": "Sampling errors can be reduced by the following methods: (1) by increasing the size of the sample (2) by stratification. Increasing the size of the sample: The sampling error can be reduced by increasing the sample size. If the sample size n is equal to the population size N, then the sampling error is zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Convenience sampling is not often recommended for research due to the possibility of sampling error and lack of representation of population. But it can be handy depending on the situation. In some situations, convenience sampling is the only possible option."}, {"text": "Random error which may vary from observation to another.Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error."}, {"text": "Random error which may vary from observation to another.Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error."}, {"text": "Random error which may vary from observation to another.Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. 
Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. 
This forms a distribution of different means, and this distribution has its own mean and variance."}]}, {"question": "What is Kruskal Wallis test used for", "positive_ctxs": [{"text": "The Kruskal-Wallis H test (sometimes also called the \"one-way ANOVA on ranks\") is a rank-based nonparametric test that can be used to determine if there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Kruskal\u2013Wallis test by ranks, Kruskal\u2013Wallis H test (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks is a non-parametric method for testing whether samples originate from the same distribution. It is used for comparing two or more independent samples of equal or different sample sizes. It extends the Mann\u2013Whitney U test, which is used for comparing only two groups."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell\u2013Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. 
It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept."}, {"text": "The following argument is the result of a suggestion made by Graham Wallis to E. T. Jaynes in 1962. It is essentially the same mathematical argument used for the Maxwell\u2013Boltzmann statistics in statistical mechanics, although the conceptual emphasis is quite different. It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisely defined concept."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}]}, {"question": "Why use root mean square instead of average", "positive_ctxs": [{"text": "Attempts to find an average value of AC would directly provide you the answer zero. Hence, RMS values are used. They help to find the effective value of AC (voltage or current). This RMS is a mathematical quantity (used in many math fields) used to compare both alternating and direct currents (or voltage)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. 
Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "The use of squared distances hinders the interpretation of RMSE. MAE is simply the average absolute vertical or horizontal distance between each point in a scatter plot and the Y=X line. In other words, MAE is the average absolute difference between X and Y. MAE is fundamentally easier to understand than the square root of the average of the squared deviations."}, {"text": "Physical scientists often use the term root mean square as a synonym for standard deviation when it can be assumed the input signal has zero mean, that is, referring to the square root of the mean squared deviation of a signal from a given baseline or fit. This is useful for electrical engineers in calculating the \"AC only\" RMS of a signal. Standard deviation being the RMS of a signal's variation about the mean, rather than about 0, the DC component is removed (that is, RMS(signal) = stdev(signal) if the mean signal is 0)."}]}, {"question": "Why is the standard p value 0 05", "positive_ctxs": [{"text": "A p-value less than 0.05 (typically \u2264 0.05) is statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability the null is correct (and the results are random). Therefore, we reject the null hypothesis, and accept the alternative hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For p = 0 and p = \u221e these functions are defined by taking limits, respectively as p \u2192 0 and p \u2192 \u221e. 
For p = 0 the limiting values are 0^0 = 0 and a^0 = 1 for a \u2260 0, so the difference becomes simply equality, so the 0-norm counts the number of unequal points. For p = \u221e the largest number dominates, and thus the \u221e-norm is the maximum difference."}, {"text": "For p = 0 and p = \u221e these functions are defined by taking limits, respectively as p \u2192 0 and p \u2192 \u221e. For p = 0 the limiting values are 0^0 = 0 and a^0 = 1 for a \u2260 0, so the difference becomes simply equality, so the 0-norm counts the number of unequal points. For p = \u221e the largest number dominates, and thus the \u221e-norm is the maximum difference."}, {"text": "For p = 0 and p = \u221e these functions are defined by taking limits, respectively as p \u2192 0 and p \u2192 \u221e. For p = 0 the limiting values are 0^0 = 0 and a^0 = 1 for a \u2260 0, so the difference becomes simply equality, so the 0-norm counts the number of unequal points. For p = \u221e the largest number dominates, and thus the \u221e-norm is the maximum difference."}, {"text": "The third is zero when p = \u200b49\u204480. The solution that maximizes the likelihood is clearly p = \u200b49\u204480 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is \u200b49\u204480."}, {"text": "The third is zero when p = \u200b49\u204480. The solution that maximizes the likelihood is clearly p = \u200b49\u204480 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is \u200b49\u204480."}, {"text": "The third is zero when p = \u200b49\u204480. The solution that maximizes the likelihood is clearly p = \u200b49\u204480 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is \u200b49\u204480."}, {"text": "However, when (n + 1)p is an integer and p is neither 0 nor 1, then the distribution has two modes: (n + 1)p and (n + 1)p \u2212 1. 
When p is equal to 0 or 1, the mode will be 0 and n correspondingly. These cases can be summarized as follows:"}]}, {"question": "What is a gradient in neural network", "positive_ctxs": [{"text": "The most used algorithm to train neural networks is gradient descent. We'll define it later, but for now hold on to the following idea: the gradient is a numeric calculation allowing us to know how to adjust the parameters of a network in such a way that its output deviation is minimized."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with"}, {"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "The model is initially fit on a training dataset, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. 
a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent."}, {"text": "The model is initially fit on a training dataset, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent."}, {"text": "The model is initially fit on a training dataset, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent."}, {"text": "The model is initially fit on a training dataset, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. a neural net or a naive Bayes classifier) is trained on the training dataset using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent."}]}, {"question": "What is meant by supervised machine learning", "positive_ctxs": [{"text": "Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. 
What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Statistical classification is a problem studied in machine learning. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification."}, {"text": "In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e."}, {"text": "In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. 
It is a type of linear classifier, i.e."}]}, {"question": "What is meant by quota sampling", "positive_ctxs": [{"text": "Definition: Quota sampling is a sampling methodology wherein data is collected from a homogeneous group. It involves a two-step process where two variables can be used to filter information from the population. It can easily be administered and helps in quick comparison."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful."}, {"text": "It is this second step which makes the technique one of non-probability sampling. In quota sampling the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Quota sampling is the non-probability version of stratified sampling. In stratified sampling, subsets of the population are created so that each subset has a common characteristic, such as gender. Random sampling chooses a number of subjects from each subset with, unlike a quota sample, each potential subject having a known probability of being selected."}, {"text": "Nonprobability sampling methods include convenience sampling, quota sampling and purposive sampling. 
In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled."}, {"text": "Nonprobability sampling methods include convenience sampling, quota sampling and purposive sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled."}]}, {"question": "What is the meaning of international communication", "positive_ctxs": [{"text": "International communication (also referred to as the study of global communication or transnational communication) is the communication practice that occurs across international borders. International communication \"encompasses political, economic, social, cultural and military concerns\"."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Though the exact definition varies between scholars, natural language can broadly be defined in contrast to artificial or constructed languages (such as computer programming languages and international auxiliary languages) and to other communication systems in nature. 
Examples of such communication systems include bees' waggle dance and whale song, to which researchers have found or applied the linguistic cognates of dialect and even syntax. However, classification of animal communication systems as languages is controversial. All language varieties of world languages are natural languages, although some varieties are subject to greater degrees of published prescriptivism or language regulation than others."}, {"text": "The global poverty line is a worldwide count of people who live below an international poverty line, referred to as the dollar-a-day line. This line represents an average of the national poverty lines of the world's poorest countries, expressed in international dollars. These national poverty lines are converted to international currency and the global line is converted back to local currency using the PPP exchange rates from the ICP."}, {"text": "Markov sources are commonly used in communication theory, as a model of a transmitter. Markov sources also occur in natural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques of hidden Markov models, such as the Viterbi algorithm."}, {"text": "The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves."}, {"text": "The direct role of the European Union (and also the law of the EU/EC) in the area of protection of national minorities is still very limited (likewise the general protection of human rights). 
The EU has relied on general international law and a European regional system of international law (based on the Council of Europe, Organization for Security and Co-operation in Europe, etc.) and in a case of necessity accepted their norms."}]}, {"question": "What is K means algorithm in machine learning", "positive_ctxs": [{"text": "K-means clustering is one of the simplest and popular unsupervised machine learning algorithms. In other words, the K-means algorithm identifies k number of centroids, and then allocates every data point to the nearest cluster, while keeping the centroids as small as possible."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Stability, also known as algorithmic stability, is a notion in computational learning theory of how a machine learning algorithm is perturbed by small changes to its inputs. A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained to recognize handwritten letters of the alphabet, using 1000 examples of handwritten letters and their labels (\"A\" to \"Z\") as a training set."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the purpose of a neural network", "positive_ctxs": [{"text": "The purpose of a neural network is to learn to recognize patterns in your data. Once the neural network has been trained on samples of your data, it can make predictions by detecting similar patterns in future data. Software that learns is truly \"Artificial Intelligence\"."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Historically, the most common type of neural network software was intended for researching neural network structures and algorithms. The primary purpose of this type of software is, through simulation, to gain a better understanding of the behavior and the properties of neural networks. 
Today in the study of artificial neural networks, simulators have largely been replaced by more general component based development environments as research platforms."}, {"text": "is a set of weights. The optimization problem of finding alpha is readily solved through neural networks, hence a \"meta-network\" where each \"neuron\" is in fact an entire neural network can be trained, and the synaptic weights of the final network is the weight applied to each expert. This is known as a linear combination of experts. It can be seen that most forms of neural networks are some subset of a linear combination: the standard neural net (where only one expert is used) is simply a linear combination with all"}, {"text": "in its hinge loss-style formulation. It is often used for learning similarity for the purpose of learning embeddings, such as learning to rank, word embeddings, thought vectors, and metric learning. Consider the task of training a neural network to recognize faces (e.g. for admission to a high security zone)."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "An autoencoder is a feed-forward neural network which is trained to approximate the identity function. That is, it is trained to map from a vector of values to the same vector. 
When used for dimensionality reduction purposes, one of the hidden layers in the network is limited to contain only a small number of network units."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}]}, {"question": "What is the difference between object and instance", "positive_ctxs": [{"text": "Object is a copy of the class. Instance is a variable that holds the memory address of the object. You can also have multiple objects of the same class and then multiple instances of each of those objects. In these cases, each object's set of instances are equivalent in value, but the instances between objects are not."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. 
In the context of economics, for example, this is usually economic cost or regret."}, {"text": "In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "Alternatively, these scores may be applied as feature weights to guide downstream modeling. Relief feature scoring is based on the identification of feature value differences between nearest neighbor instance pairs. If a feature value difference is observed in a neighboring instance pair with the same class (a 'hit'), the feature score decreases."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. 
Establishing and correcting for bias is necessary for calibration."}]}, {"question": "How do you create a generative adversarial network", "positive_ctxs": [{"text": "GAN Training Step 1 \u2014 Select a number of real images from the training set. Step 2 \u2014 Generate a number of fake images. This is done by sampling random noise vectors and creating images from them using the generator. Step 3 \u2014 Train the discriminator for one or more epochs using both fake and real images."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "The generative network generates candidates while the discriminative network evaluates them. The contest operates in terms of data distributions. Typically, the generative network learns to map from a latent space to a data distribution of interest, while the discriminative network distinguishes candidates produced by the generator from the true data distribution."}]}, {"question": "What is data representation in machine learning", "positive_ctxs": [{"text": "In implementing most of the machine learning algorithms, we represent each data point with a feature vector as the input. A vector is basically an array of numerics, or in physics, an object with magnitude and direction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "CCMs have also been applied to latent learning frameworks, where the learning problem is defined over a latent representation layer. Since the notion of a correct representation is inherently ill-defined, no gold-standard labeled data regarding the representation decision is available to the learner. Identifying the correct (or optimal) learning representation is viewed as a structured prediction process and therefore modeled as a CCM."}, {"text": "The origins of data preprocessing are located in data mining. The idea is to aggregate existing information and search in the content. Later it was recognized that for machine learning and neural networks a data preprocessing step is needed too."}, {"text": "Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, the machine is forced to build a compact internal representation of its world. 
In contrast to Supervised Learning (SL) where data is tagged by a human, eg."}, {"text": "Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, the machine is forced to build a compact internal representation of its world. In contrast to Supervised Learning (SL) where data is tagged by a human, eg."}, {"text": "Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, the machine is forced to build a compact internal representation of its world. In contrast to Supervised Learning (SL) where data is tagged by a human, eg."}, {"text": "In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag of words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets."}]}, {"question": "What is the relationship between precision and accuracy", "positive_ctxs": [{"text": "In other words, accuracy describes the difference between the measurement and the part's actual value, while precision describes the variation you see when you measure the same part repeatedly with the same device."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the parameter is the bull's-eye of a target, and the arrows are estimates, then a relatively high absolute value for the bias means the average position of the arrows is off-target, and a relatively low absolute bias means the average position of the arrows is on target. They may be dispersed, or may be clustered. 
The relationship between bias and variance is analogous to the relationship between accuracy and precision."}, {"text": "In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method."}, {"text": "In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method."}, {"text": "Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. Brain surgery provides an illustrative example of the tradeoff. Consider a brain surgeon removing a cancerous tumor from a patient\u2019s brain."}, {"text": "Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. Brain surgery provides an illustrative example of the tradeoff. Consider a brain surgeon removing a cancerous tumor from a patient\u2019s brain."}, {"text": "Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. Brain surgery provides an illustrative example of the tradeoff. 
Consider a brain surgeon removing a cancerous tumor from a patient\u2019s brain."}, {"text": "Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. Brain surgery provides an illustrative example of the tradeoff. Consider a brain surgeon removing a cancerous tumor from a patient\u2019s brain."}]}, {"question": "How do you find the uncertainty of a measurement", "positive_ctxs": [{"text": "To find the average, add them together and divide by the number of values (10 in this case). When repeated measurements give different results, we want to know how widely spread the readings are. The spread of values tells us something about the uncertainty of a measurement."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In metrology, measurement uncertainty is a central concept quantifying the dispersion one may reasonably attribute to a measurement result. Such an uncertainty can also be referred to as a measurement error. In daily life, measurement uncertainty is often implicit (\"He is 6 feet tall\" give or take a few inches), while for any serious use an explicit statement of the measurement uncertainty is necessary."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Type B, those evaluated by other means, e.g., by assigning a probability distributionBy propagating the variances of the components through a function relating the components to the measurement result, the combined measurement uncertainty is given as the square root of the resulting variance. The simplest form is the standard deviation of a repeated observation."}, {"text": "Uncertainty of a measurement can be determined by repeating a measurement to arrive at an estimate of the standard deviation of the values. Then, any single value has an uncertainty equal to the standard deviation. 
However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements."}, {"text": "The argument so far has glossed over the question of fluctuations. It has also implicitly assumed that the uncertainty predicted at time t1 for the variables at time t2 will be much smaller than the measurement error. But if the measurements do meaningfully update our knowledge of the system, our uncertainty as to its state is reduced, giving a new SI(2) which is less than SI(1)."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "What is maximum entropy model in NLP", "positive_ctxs": [{"text": "The maximum entropy principle is defined as modeling a given set of data by finding the highest entropy to satisfy the constraints of our prior knowledge. The maximum entropy model is a conditional probability model p(y|x) that allows us to predict class labels given a set of features for a given data point."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. 
An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction."}, {"text": "In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy."}, {"text": "Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations."}, {"text": "Alternatively, the principle is often invoked for model specification: in this case the observed data itself is assumed to be the testable information. Such models are widely used in natural language processing. An example of such a model is logistic regression, which corresponds to the maximum entropy classifier for independent observations."}, {"text": "Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy"}, {"text": "Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. 
However, maximum entropy is not a generalisation of all such sufficient updating rules."}, {"text": "Maximum entropy is a sufficient updating rule for radical probabilism. Richard Jeffrey's probability kinematics is a special case of maximum entropy inference. However, maximum entropy is not a generalisation of all such sufficient updating rules."}]}, {"question": "Did hide become a ghoul", "positive_ctxs": [{"text": "Now living under the identity of Scarecrow, Hide helped Koutarou Amon flee from Akihiro Kanou after he was turned into a one-eyed ghoul."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "the RR=0.9796 from above example) can clinically hide and conceal an important doubling of adverse risk associated with a drug or exposure."}, {"text": "Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++, Java, and Smalltalk, typically use classes for this purpose."}, {"text": "On the other hand, in VR the surrounding environment is completely virtual. A demonstration of how AR layers objects onto the real world can be seen with augmented reality games. WallaMe is an augmented reality game application that allows users to hide messages in real environments, utilizing geolocation technology in order to enable users to hide messages wherever they may wish in the world."}, {"text": "To hide patterns in encrypted data while avoiding the re-issuing of a new key after each block cipher invocation, a method is needed to randomize the input data. In 1980, the NIST published a national standard document designated Federal Information Processing Standard (FIPS) PUB 81, which specified four so-called block cipher modes of operation, each describing a different solution for encrypting a set of input blocks. 
The first mode implements the simple strategy described above, and was specified as the electronic codebook (ECB) mode."}, {"text": "Fisher developed significance testing as a flexible tool for researchers to weigh their evidence. Instead testing has become institutionalized. Statistical significance has become a rigidly defined and enforced criterion for the publication of experimental results in many scientific journals."}, {"text": "In the latter case, individuals with a higher fitness have a higher chance to be selected than individuals with a lower fitness, but typically even the weak individuals have a chance to become a parent or to survive."}, {"text": "During the first decade of the century, Professor Caroline Haythornthwaite explored the impact of media type on the development of social ties, observing that human interactions can be analyzed to gain novel insight not from strong interactions (i.e. people that are strongly related to the subject) but, rather, from weak ties. This provides Learning Analytics with a central idea: apparently un-related data may hide crucial information."}]}, {"question": "How knowledge is represented using semantic network", "positive_ctxs": [{"text": "In Semantic networks, we can represent our knowledge in the form of graphical networks. This network consists of nodes representing objects and arcs which describe the relationship between those objects. Semantic networks can categorize the object in different forms and can also link those objects."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A semantic network, or frame network is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. 
It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields."}, {"text": "In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context."}, {"text": "In natural language processing and information retrieval, explicit semantic analysis (ESA) is a vectoral representation of text (individual words or entire documents) that uses a document corpus as a knowledge base. Specifically, in ESA, a word is represented as a column vector in the tf\u2013idf matrix of the text corpus and a document (string of words) is represented as the centroid of the vectors representing its words. Typically, the text corpus is English Wikipedia, though other corpora including the Open Directory Project have been used.ESA was designed by Evgeniy Gabrilovich and Shaul Markovitch as a means of improving text categorization"}, {"text": "Topic modeling is a classic solution to the problem of information retrieval using linked data and semantic web technology. Related models and techniques are, among others, latent semantic indexing, independent component analysis, probabilistic latent semantic indexing, non-negative matrix factorization, and Gamma-Poisson distribution."}, {"text": "Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type."}, {"text": "Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as extracting relations from mediums like text. 
There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE (NIPS 2013). Applications of embedding knowledge base data include Social network analysis and Relationship extraction."}, {"text": "Recently automatic reasoners found in semantic web a new field of application. Being based upon description logic, knowledge expressed using one variant of OWL can be logically processed, i.e., inferences can be made upon it."}]}, {"question": "What is padding in RSA encryption", "positive_ctxs": [{"text": "For example RSA Encryption padding is randomized, ensuring that the same message encrypted multiple times looks different each time. It also avoids other weaknesses, such as encrypting the same message using different RSA keys leaking the message, or an attacker creating messages derived from some other ciphertexts."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Block cipher processing of data is usually described as a mode of operation. Modes are primarily defined for encryption as well as authentication, though newer designs exist that combine both security solutions in so-called authenticated encryption modes. While encryption and authenticated encryption modes usually take an IV matching the cipher's block size, authentication modes are commonly realized as deterministic algorithms, and the IV is set to zero or some other fixed value."}, {"text": "There have subsequently been accusations that RSA Security knowingly inserted a NSA backdoor into its products, possibly as part of the Bullrun program. RSA has denied knowingly inserting a backdoor into its products.It has also been theorized that hardware RNGs could be secretly modified to have less entropy than stated, which would make encryption using the hardware RNG susceptible to attack. 
One such method which has been published works by modifying the dopant mask of the chip, which would be undetectable to optical reverse-engineering."}, {"text": "This unpredictable value is added to the first plaintext block before subsequent encryption. In turn, the ciphertext produced in the first encryption step is added to the second plaintext block, and so on. The ultimate goal for encryption schemes is to provide semantic security: by this property, it is practically impossible for an attacker to draw any knowledge from observed ciphertext."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}]}, {"question": "How do you use linear regression to predict stock prices", "positive_ctxs": [{"text": "Predicting Google's Stock Price using Linear RegressionTake a value of x (say x=0)Find the corresponding value of y by putting x=0 in the equation.Store the (x,y) value pair in a table.Repeat the process once or twice or as many times as we want.Plot the points on the graph to obtain the straight line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "The accuracy of prediction of business failure is a very crucial issue in financial decision-making. Therefore, different ensemble classifiers are proposed to predict financial crises and financial distress. Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices by buying and selling activities, ensemble classifiers are required to analyze the changes in the stock market data and detect suspicious symptom of stock price manipulation."}, {"text": "The accuracy of prediction of business failure is a very crucial issue in financial decision-making. Therefore, different ensemble classifiers are proposed to predict financial crises and financial distress. Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices by buying and selling activities, ensemble classifiers are required to analyze the changes in the stock market data and detect suspicious symptom of stock price manipulation."}, {"text": "The accuracy of prediction of business failure is a very crucial issue in financial decision-making. Therefore, different ensemble classifiers are proposed to predict financial crises and financial distress. Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices by buying and selling activities, ensemble classifiers are required to analyze the changes in the stock market data and detect suspicious symptom of stock price manipulation."}, {"text": "Because of the Internet's ability to rapidly convey large amounts of information throughout the world, the use of collective intelligence to predict stock prices and stock price direction has become increasingly viable. Websites aggregate stock market information that is as current as possible so professional or amateur stock analysts can publish their viewpoints, enabling amateur investors to submit their financial opinions and create an aggregate opinion. 
The opinion of all investor can be weighed equally so that a pivotal premise of the effective application of collective intelligence can be applied: the masses, including a broad spectrum of stock market expertise, can be utilized to more accurately predict the behavior of financial markets.Collective intelligence underpins the efficient-market hypothesis of Eugene Fama \u2013 although the term collective intelligence is not used explicitly in his paper."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}]}, {"question": "What is truncated Bptt", "positive_ctxs": [{"text": "Truncated Backpropagation Through Time (truncated BPTT) is a widespread method for learning recurrent computational graphs. Truncated BPTT keeps the computational benefits of Backpropagation Through Time (BPTT) while relieving the need for a complete backtrack through the whole data sequence at every step."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "It indicates that the distribution must be truncated within the given range, and rescaled appropriately. In this particular case, a truncated normal distribution arises. Sampling from this distribution depends on how much is truncated."}, {"text": "It indicates that the distribution must be truncated within the given range, and rescaled appropriately. In this particular case, a truncated normal distribution arises. Sampling from this distribution depends on how much is truncated."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values."}, {"text": "The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values."}, {"text": "The interquartile mean is a specific example of a truncated mean. It is simply the arithmetic mean after removing the lowest and the highest quarter of values."}]}, {"question": "What is meant by Hyperplane", "positive_ctxs": [{"text": "In geometry, a hyperplane is a subspace whose dimension is one less than that of its ambient space. If a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the space is 2-dimensional, its hyperplanes are the 1-dimensional lines."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. 
A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But the original use of the phrase \"complete Archimedean field\" was by David Hilbert, who meant still something else by it. He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "What is the best Python library for Hidden Markov Models", "positive_ctxs": [{"text": "HMMs is the Hidden Markov Models library for Python. It is easy to use, general purpose library, implementing all the important submethods, needed for the training, examining and experimenting with the data models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. 
For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as in what task or activity the person is performing. Two kinds of Hierarchical Markov Models are the Hierarchical hidden Markov model and the Abstract Hidden Markov Model."}, {"text": "Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short. It was originally described under the name \"Infinite Hidden Markov Model\"[3] and was further formalized in[4]."}, {"text": "An example could be the activity of preparing a stir fry, which can be broken down into the subactivities or actions of cutting vegetables, frying the vegetables in a pan and serving it on a plate. Examples of such a hierarchical model are Layered Hidden Markov Models (LHMMs) and the hierarchical hidden Markov model (HHMM), which have been shown to significantly outperform its non-hierarchical counterpart in activity recognition."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "A Hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered as the simplest dynamic Bayesian network. 
HMM models are widely used in speech recognition, for translating a time series of spoken words into text."}, {"text": "In Python with the NumPy numerical library or the SymPy symbolic library, multiplication of array objects as a1*a2 produces the Hadamard product, but otherwise multiplication as a1@a2 or matrix objects m1*m2 will produce a matrix product. The Eigen C++ library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product)."}]}, {"question": "What is the relationship between the p value of a t test and the Type I and Type II errors", "positive_ctxs": [{"text": "For example, a p-value of 0.01 would mean there is a 1% chance of committing a Type I error. However, using a lower value for alpha means that you will be less likely to detect a true difference if one really exists (thus risking a type II error)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "By selecting a low threshold (cut-off) value and modifying the alpha (p) level, the quality of the hypothesis test can be increased. The knowledge of Type I errors and Type II errors is widely used in medical science, biometrics and computer science."}, {"text": "By selecting a low threshold (cut-off) value and modifying the alpha (p) level, the quality of the hypothesis test can be increased. The knowledge of Type I errors and Type II errors is widely used in medical science, biometrics and computer science."}, {"text": "By selecting a low threshold (cut-off) value and modifying the alpha (p) level, the quality of the hypothesis test can be increased. The knowledge of Type I errors and Type II errors is widely used in medical science, biometrics and computer science."}, {"text": "By selecting a low threshold (cut-off) value and modifying the alpha (p) level, the quality of the hypothesis test can be increased. 
The knowledge of Type I errors and Type II errors is widely used in medical science, biometrics and computer science."}, {"text": "By selecting a low threshold (cut-off) value and modifying the alpha (p) level, the quality of the hypothesis test can be increased. The knowledge of Type I errors and Type II errors is widely used in medical science, biometrics and computer science."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}]}, {"question": "Why is the complexity of DFS o v e", "positive_ctxs": [{"text": "It's O(V+E) because each visit to v of V must visit each e of E where |e| <= V-1. Since there are V visits to v of V then that is O(V). So total time complexity is O(V + E)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "be the ordering computed by the standard recursive DFS algorithm. This ordering is called the lexicographic depth-first search ordering. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source."}, {"text": "Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, the notation o \u2208 A is used. 
A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }."}, {"text": "These two variations of DFS visit the neighbors of each vertex in the opposite order from each other: the first neighbor of v visited by the recursive variation is the first one in the list of adjacent edges, while in the iterative variation the first visited neighbor is the last one in the list of adjacent edges. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G."}, {"text": "For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search."}, {"text": "The notation + is usually reserved for commutative binary operations (operations where x + y = y + x for all x, y). If such an operation admits an identity element o (such that x + o ( = o + x ) = x for all x), then this element is unique ( o\u2032 = o\u2032 + o = o ). 
For a given x , if there exists x\u2032 such that x + x\u2032 ( = x\u2032 + x ) = o , then x\u2032 is called an additive inverse of x."}, {"text": "A decision version of the problem (testing whether some vertex u occurs before some vertex v in this order) is P-complete, meaning that it is \"a nightmare for parallel processing\".A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity class RNC. As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC."}, {"text": "For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level."}]}, {"question": "What is supervised and unsupervised learning explain with the examples", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. 
From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. 
Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}]}, {"question": "How does machine learning clean data", "positive_ctxs": [{"text": "Best Practices of Data CleaningSetting up a Quality Plan. RELATED BLOG. Fill-out missing values. One of the first steps of fixing errors in your dataset is to find incomplete values and fill them out. Removing rows with missing values. Fixing errors in the structure. Reducing data for proper data handling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}]}, {"question": "How do you explain linear regression", "positive_ctxs": [{"text": "In statistics, linear regression is a linear approach to modeling the relationship between a scalar response (or dependent variable) and one or more explanatory variables (or independent variables). The case of one explanatory variable is called simple linear regression. Such models are called linear models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). 
Then you can still do research, but it is not causal, it is correlational."}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What is the future of AI and machine learning", "positive_ctxs": [{"text": "Some business analysts at claim that AI is a game changer for the personal device market. By 2020, about 60 percent of personal-device technology vendors will depend on AI-enabled Cloud platforms to deliver enhanced functionality and personalized services. AI technology will deliver an \u201cemotional user experience.\u201d"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. 
Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}, {"text": "The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common."}]}, {"question": "Is linear regression non parametric", "positive_ctxs": [{"text": "Linear models, generalized linear models, and nonlinear models are examples of parametric regression models because we know the function that describes the relationship between the response and explanatory variables. 
If the relationship is unknown and nonlinear, nonparametric regression models should be used."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "There are several common parametric empirical Bayes models, including the Poisson\u2013gamma model (below), the Beta-binomial model, the Gaussian\u2013Gaussian model, the Dirichlet-multinomial model, as well as specific models for Bayesian linear regression (see below) and Bayesian multivariate linear regression. 
More advanced approaches include hierarchical Bayes models and Bayesian mixture models."}, {"text": "can not be described by the linear relationship, then one can find some non linear functional relationship between the response and predictor, and extend the model to nonlinear mixed-effects model. For example, when the response"}]}, {"question": "How do you find the maximum likelihood estimator", "positive_ctxs": [{"text": "Definition: Given data the maximum likelihood estimate (MLE) for the parameter p is the value of p that maximizes the likelihood P(data |p). That is, the MLE is the value of p for which the data is most likely. 100 P(55 heads|p) = ( 55 ) p55(1 \u2212 p)45."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In some cases, the first-order conditions of the likelihood function can be solved explicitly; for instance, the ordinary least squares estimator maximizes the likelihood of the linear regression model. Under most circumstances, however, numerical methods will be necessary to find the maximum of the likelihood function."}, {"text": "In some cases, the first-order conditions of the likelihood function can be solved explicitly; for instance, the ordinary least squares estimator maximizes the likelihood of the linear regression model. Under most circumstances, however, numerical methods will be necessary to find the maximum of the likelihood function."}, {"text": "In some cases, the first-order conditions of the likelihood function can be solved explicitly; for instance, the ordinary least squares estimator maximizes the likelihood of the linear regression model. 
Under most circumstances, however, numerical methods will be necessary to find the maximum of the likelihood function."}, {"text": "Another estimator which is asymptotically normal and efficient is the maximum likelihood estimator (MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example."}, {"text": "A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter \u03b8 that maximizes the probability of \u03b8 given the data, given by Bayes' theorem:"}, {"text": "A maximum likelihood estimator coincides with the most probable Bayesian estimator given a uniform prior distribution on the parameters. Indeed, the maximum a posteriori estimate is the parameter \u03b8 that maximizes the probability of \u03b8 given the data, given by Bayes' theorem:"}]}, {"question": "How do you find the mean in statistics", "positive_ctxs": [{"text": "How to Find the Mean. The mean is the average of the numbers. It is easy to calculate: add up all the numbers, then divide by how many numbers there are. In other words it is the sum divided by the count."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "When would you use exponential smoothing", "positive_ctxs": [{"text": "Exponential smoothing is a way to smooth out data for presentations or to make forecasts. It's usually used for finance and economics. 
If you have a time series with a clear pattern, you could use moving averages \u2014 but if you don't have a clear pattern you can use exponential smoothing to forecast."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For every exponential smoothing method we also need to choose the value for the smoothing parameters. For simple exponential smoothing, there is only one smoothing parameter (\u03b1), but for the methods that follow there is usually more than one smoothing parameter."}, {"text": "They differ in that exponential smoothing takes into account all past data, whereas moving average only takes into account k past data points. Computationally speaking, they also differ in that moving average requires that the past k data points, or the data point at lag k + 1 plus the most recent forecast value, to be kept, whereas exponential smoothing only needs the most recent forecast value to be kept.In the signal processing literature, the use of non-causal (symmetric) filters is commonplace, and the exponential window function is broadly used in this fashion, but a different terminology is used: exponential smoothing is equivalent to a first-order infinite-impulse response (IIR) filter and moving average is equivalent to a finite impulse response filter with equal weighting factors."}, {"text": "Triple exponential smoothing applies exponential smoothing three times, which is commonly used when there are three high frequency signals to be removed from a time series under study. There are different types of seasonality: 'multiplicative' and 'additive' in nature, much like addition and multiplication are basic operations in mathematics."}, {"text": "The use of the exponential window function is first attributed to Poisson as an extension of a numerical analysis technique from the 17th century, and later adopted by the signal processing community in the 1940s. Here, exponential smoothing is the application of the exponential, or Poisson, window function. 
Exponential smoothing was first suggested in the statistical literature without citation to previous work by Robert Goodell Brown in 1956, and then expanded by Charles C. Holt in 1957."}, {"text": "The basic idea behind double exponential smoothing is to introduce a term to take into account the possibility of a series exhibiting some form of trend. This slope component is itself updated via exponential smoothing."}, {"text": "Simple exponential smoothing does not do well when there is a trend in the data, which is inconvenient. In such situations, several methods were devised under the name \"double exponential smoothing\" or \"second-order exponential smoothing,\" which is the recursive application of an exponential filter twice, thus being termed \"double exponential smoothing\". This nomenclature is similar to quadruple exponential smoothing, which also references its recursion depth."}, {"text": "There are cases where the smoothing parameters may be chosen in a subjective manner \u2013 the forecaster specifies the value of the smoothing parameters based on previous experience. However, a more robust and objective way to obtain values for the unknown parameters included in any exponential smoothing method is to estimate them from the observed data."}]}, {"question": "How can we control the power factor", "positive_ctxs": [{"text": "Improving the PF can maximize current-carrying capacity, improve voltage to equipment, reduce power losses, and lower electric bills. The simplest way to improve power factor is to add PF correction capacitors to the electrical system. PF correction capacitors act as reactive current generators."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. 
The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "When we control for the effect of CVs on the DV, we remove it from the denominator making F larger, thereby increasing your power to find a significant effect if one exists at all."}, {"text": "When we control for the effect of CVs on the DV, we remove it from the denominator making F larger, thereby increasing your power to find a significant effect if one exists at all."}, {"text": "Algorithmic probability deals with the following questions: Given a body of data about some phenomenon that we want to understand, how can we select the most probable hypothesis of how it was caused from among all possible hypotheses and how can we evaluate the different hypotheses? How can we predict future data and how can we measure the likelihood of that prediction being the right one?"}]}, {"question": "What is the difference between extrapolation and interpolation", "positive_ctxs": [{"text": "Interpolation refers to using the data in order to predict data within the dataset. Extrapolation is the use of the data set to predict beyond the data set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is often done by using a related series known for all relevant dates. Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. 
A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "This is often done by using a related series known for all relevant dates. Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "This is often done by using a related series known for all relevant dates. Alternatively polynomial interpolation or spline interpolation is used where piecewise polynomial functions are fit into time intervals such that they fit smoothly together. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function (also called regression). The main difference between regression and interpolation is that polynomial regression gives a single polynomial that models the entire data set."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. 
The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}, {"text": "Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values."}]}, {"question": "What is MDP in machine learning", "positive_ctxs": [{"text": "Machine Learning: Reinforcement Learning \u2014 Markov Decision Processes. A mathematical representation of a complex decision making process is \u201cMarkov Decision Processes\u201d (MDP). MDP is defined by: A state S, which represents every state that one could be in, within a defined world."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another application of MDP process in machine learning theory is called learning automata. This is also one type of reinforcement learning if the environment is stochastic. The first detail learning automata paper is surveyed by Narendra and Thathachar (1974), which were originally described explicitly as finite state automata."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. 
Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible."}, {"text": "The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What are sample moments", "positive_ctxs": [{"text": "Sample moments are those that are utilized to approximate the unknown population moments. Sample moments are calculated from the sample data. Such moments include mean, variance, skewness, and kurtosis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "of a beta distribution supported in the [a, c] interval -see section \"Alternative parametrizations, Four parameters\"-) can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis). The excess kurtosis was expressed in terms of the square of the skewness, and the sample size \u03bd = \u03b1 + \u03b2, (see previous section \"Kurtosis\") as follows:"}, {"text": "This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean."}, {"text": "This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. 
The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean."}, {"text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X \u2212 E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions."}, {"text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X \u2212 E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions."}]}, {"question": "What is the difference between on policy and off policy", "positive_ctxs": [{"text": "For example, Q-learning is an off-policy learner. On-policy methods attempt to evaluate or improve the policy that is used to make decisions. In contrast, off-policy methods evaluate or improve a policy different from that used to generate the data.11\u200f/04\u200f/2020"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An important distinction in RL is the difference between on-policy algorithms that require evaluating or improving the policy that collects data, and off-policy algorithms that can learn a policy from data generated by an arbitrary policy. Generally, value-function based methods such as Q-learning are better suited for off-policy learning and have better sample-efficiency - the amount of data required to learn a task is reduced because data is re-used for learning. At the extreme, offline (or \"batch\") RL considers learning a policy from a fixed dataset without additional interaction with the environment."}, {"text": "AIX 5 implements the following scheduling policies: FIFO, round robin, and a fair round robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. 
The round robin policy is named SCHED_RR in AIX, and the fair round robin is called SCHED_OTHER."}, {"text": "Much like the effect on consumers, the effect of standardization on technology and innovation is mixed. Meanwhile, the various links between research and standardization have been identified, also as a platform of knowledge transfer and translated into policy measures (e.g."}, {"text": "When the scheduling policy is dynamic in the sense that it can make adjustments during the process based on up-to-date information, posterior Gittins index is developed to find the optimal policy that minimizes the expected discounted reward in the class of dynamic policies."}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "New approaches start to be developed in ERA in order to quantify this risk and to communicate effectively on it with both the managers and the general public.Ecological risk assessment is complicated by the fact that there are many nonchemical stressors that substantially influence ecosystems, communities, and individual plants and animals, as well as across landscapes and regions. Defining the undesired (adverse) event is a political or policy judgment, further complicating applying traditional risk analysis tools to ecological systems. 
Much of the policy debate surrounding ecological risk assessment is over defining precisely what is an adverse event."}]}, {"question": "How does extended Kalman filter work", "positive_ctxs": [{"text": "In the extended Kalman filter, the state transition and observation models don't need to be linear functions of the state but may instead be differentiable functions. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The underlying model is a hidden Markov model where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Also, Kalman filter has been successfully used in multi-sensor fusion, and distributed sensor networks to develop distributed or consensus Kalman filter."}, {"text": "\u2014are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean."}, {"text": "The second differential equation, for the covariance, is an example of a Riccati equation. 
Nonlinear generalizations to Kalman\u2013Bucy filters include continuous time extended Kalman filter and cubature Kalman filter."}, {"text": "In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions are of differentiable type."}, {"text": "In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the Kalman filter); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the extended Kalman filter or the particle filter."}, {"text": "The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at."}, {"text": "are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if probability distribution is Gaussian a third-order approximation is possible)."}]}, {"question": "What is causal analysis and resolution", "positive_ctxs": [{"text": "The purpose of Causal Analysis and Resolution (CAR) is to identify causes of defects and other problems and take action to prevent them from occurring in the future. 
Introductory Notes The Causal Analysis and Resolution process area involves the following: Identifying and analyzing causes of defects and other problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Exploratory causal analysis, also known as \"data causality\" or \"causal discovery\" is the use of statistical algorithms to infer associations in observed data sets that are potentially causal under strict assumptions. ECA is a type of causal inference distinct from causal modeling and treatment effects in randomized controlled trials. It is exploratory research usually preceding more formal causal research in the same way exploratory data analysis often precedes statistical hypothesis testing in data analysis"}, {"text": "Cognitive resolution is the way disputants understand and view the conflict, with beliefs, perspectives, understandings and attitudes. Emotional resolution is in the way disputants feel about a conflict, the emotional energy. Behavioral resolution is reflective of how the disputants act, their behavior."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. 
Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}]}, {"question": "What is control problem", "positive_ctxs": [{"text": "A control problem involves a system that is described by state variables. The problem is to find a time control strategy to make the system reach the target state, that is, find conditions for application of force as a function of the control variables of the system (V,W,Th)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. 
An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function."}, {"text": "A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "What is tradeoff between bias and variance", "positive_ctxs": [{"text": "You now know that: Bias is the simplifying assumptions made by the model to make the target function easier to approximate. Variance is the amount that the estimate of the target function will change given different training data. Trade-off is tension between the error introduced by the bias and the variance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "if it predicts different output values when trained on different training sets. The prediction error of a learned classifier is related to the sum of the bias and the variance of the learning algorithm. Generally, there is a tradeoff between bias and variance."}, {"text": "if it predicts different output values when trained on different training sets. 
The prediction error of a learned classifier is related to the sum of the bias and the variance of the learning algorithm. Generally, there is a tradeoff between bias and variance."}, {"text": "A first issue is the tradeoff between bias and variance. Imagine that we have available several different, but equally good, training data sets. A learning algorithm is biased for a particular input"}, {"text": "A first issue is the tradeoff between bias and variance. Imagine that we have available several different, but equally good, training data sets. A learning algorithm is biased for a particular input"}, {"text": "A learning algorithm with low bias must be \"flexible\" so that it can fit the data well. But if the learning algorithm is too flexible, it will fit each training data set differently, and hence have high variance. A key aspect of many supervised learning methods is that they are able to adjust this tradeoff between bias and variance (either automatically or by providing a bias/variance parameter that the user can adjust)."}, {"text": "A learning algorithm with low bias must be \"flexible\" so that it can fit the data well. But if the learning algorithm is too flexible, it will fit each training data set differently, and hence have high variance. A key aspect of many supervised learning methods is that they are able to adjust this tradeoff between bias and variance (either automatically or by providing a bias/variance parameter that the user can adjust)."}, {"text": "If the parameter is the bull's-eye of a target, and the arrows are estimates, then a relatively high absolute value for the bias means the average position of the arrows is off-target, and a relatively low absolute bias means the average position of the arrows is on target. They may be dispersed, or may be clustered. 
The relationship between bias and variance is analogous to the relationship between accuracy and precision."}]}, {"question": "How do you handle high variance data", "positive_ctxs": [{"text": "You can reduce high variance by reducing the number of features in the model. There are several methods available to check which features don't add much value to the model and which are of importance. Increasing the size of the training set can also help the model generalise."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "How high is the probability they really are drunk? Many would answer as high as 95%, but the correct probability is about 2%."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates to the whole data set. Theoretically you should obtain somewhat higher variance from the smaller datasets used for estimation, but the expectation of the coefficient values should be the same. 
Naturally, the observed coefficient values will vary, but look at how much they vary."}, {"text": "Underfitting occurs when a statistical model or machine learning algorithm cannot adequately capture the underlying structure of the data. It occurs when the model or algorithm does not fit the data enough. Underfitting occurs if the model or algorithm shows low variance but high bias (to contrast the opposite, overfitting from high variance and low bias)."}]}, {"question": "Why do we use standard deviation instead of mean deviation", "positive_ctxs": [{"text": "If the data is symmetrical - normally distributed - then the mean tell you where the line of symmetry falls. The standard deviation tells you more. It tells you if the data is closely distributed to the mean (small standard deviation) or is the data widely distributed (big standard deviation)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. 
To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "Small samples are somewhat more likely to underestimate the population standard deviation and have a mean that differs from the true population mean, and the Student t-distribution accounts for the probability of these events with somewhat heavier tails compared to a Gaussian. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation \"s\" instead of \u03c3, and we could use this value to calculate confidence intervals."}, {"text": "However, for skewed beta distributions such that \u03b1 \u2192 0 or \u03b2 \u2192 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation."}, {"text": "The mean and the standard deviation of a set of data are descriptive statistics usually reported together. In a certain sense, the standard deviation is a \"natural\" measure of statistical dispersion if the center of the data is measured about the mean. 
This is because the standard deviation from the mean is smaller than from any other point."}]}, {"question": "Does the dependent variable need to be normally distributed in linear regression", "positive_ctxs": [{"text": "No, you don't have to transform your observed variables just because they don't follow a normal distribution. Linear regression analysis, which includes t-test and ANOVA, does not assume normality for either predictors (IV) or an outcome (DV). No way! Yes, you should check normality of errors AFTER modeling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes is particularly important in evaluating the appropriateness of ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes is particularly important in evaluating the appropriateness of ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "One application of normality tests is to the residuals from a linear regression model. If they are not normally distributed, the residuals should not be used in Z tests or in any other tests derived from the normal distribution, such as t tests, F tests and chi-squared tests. If the residuals are not normally distributed, then the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing, etc."}, {"text": "One application of normality tests is to the residuals from a linear regression model. 
If they are not normally distributed, the residuals should not be used in Z tests or in any other tests derived from the normal distribution, such as t tests, F tests and chi-squared tests. If the residuals are not normally distributed, then the dependent variable or at least one explanatory variable may have the wrong functional form, or important variables may be missing, etc."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}]}, {"question": "What is bootstrap sampling in machine learning and why is it important 1", "positive_ctxs": [{"text": "The bootstrap method is a resampling technique used to estimate statistics on a population by sampling a dataset with replacement. It can be used to estimate summary statistics such as the mean or standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bootstrap aggregating, also called bagging (from bootstrap aggregating), is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method."}, {"text": "Bootstrap aggregating, also called bagging (from bootstrap aggregating), is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method."}, {"text": "Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. 
Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is accounted in Wolter (2007)."}, {"text": "Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is accounted in Wolter (2007)."}, {"text": "Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is accounted in Wolter (2007)."}, {"text": "Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is accounted in Wolter (2007)."}, {"text": "Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and under unequal-probability sampling designs. 
Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995), whereas a basic introduction is given in Wolter (2007)."}]}, {"question": "How long will it take to learn deep learning", "positive_ctxs": [{"text": "Each of the steps should take about 4\u20136 weeks' time. And in about 26 weeks since the time you started, and if you followed all of the above religiously, you will have a solid foundation in deep learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In 2015, Blippar demonstrated a mobile augmented reality application that uses deep learning to recognize objects in real time. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories. As of 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot the ability to learn new tasks through observation."}]}, {"question": "What is learning and types of learning in artificial intelligence", "positive_ctxs": [{"text": "There are three general categories of learning that artificial intelligence (AI)/machine learning utilizes to actually learn. They are Supervised Learning, Unsupervised Learning and Reinforcement learning. The machine then maps the inputs and the outputs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Artificial intelligence and computer vision share other topics such as pattern recognition and learning techniques. 
Consequently, computer vision is sometimes seen as a part of the artificial intelligence field or the computer science field in general."}, {"text": "Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, topology and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs."}, {"text": "The limitations of the previous type of expert systems have urged researchers to develop new types of approaches. They have developed more efficient, flexible and powerful approaches in order to simulate the human decision-making process. Some of the approaches that researchers have developed are based on new methods of artificial intelligence (AI), and in particular in machine learning and data mining approaches with a feedback mechanism."}]}, {"question": "What type of data does machine learning need", "positive_ctxs": [{"text": "Machine learning algorithms are almost always optimized for raw, detailed source data. 
Thus, the data environment must provision large quantities of raw data for discovery-oriented analytics practices such as data exploration, data mining, statistics, and machine learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Self-driving cars encapsulate many machine learning technologies to function: computer vision for analyzing obstacles, machine learning for adapting their pace to the environment (e.g., bumpiness of the road). Due to the potentially high number of self-driving cars and the need for them to quickly respond to real world situations, the traditional cloud approach may generate safety risks. Federated learning can represent a solution for limiting the volume of data transfer and accelerating learning processes."}, {"text": "Error-driven learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning."}, {"text": "Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that through mimicry, the machine is forced to build a compact internal representation of its world. 
In contrast to Supervised Learning (SL) where data is tagged by a human, e.g."}, {"text": "(Of course, this does not completely eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.) The word \"deep\" in \"deep learning\" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth."}]}, {"question": "Which technique can be implemented if you want to reduce the dimensionality of a certain statistical problem", "positive_ctxs": [{"text": "Here is a brief review of our original seven techniques for dimensionality reduction: Missing Values Ratio. Low Variance Filter. High Correlation Filter. Random Forests/Ensemble Trees. Principal Component Analysis (PCA). Backward Feature Elimination. Forward Feature Construction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The vector space associated with these vectors is often called the feature space. 
In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "Suppose you only observe the empirical mean value, y, of N tosses of a six-sided die. Given that information, you want to infer the probabilities that a specific value of the face will show up in the next toss of the die. You also know that the sum of the probabilities must be 1."}, {"text": "The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories."}]}, {"question": "How is a decision tree trained", "positive_ctxs": [{"text": "Decision Trees in Machine Learning. Decision Tree models are created using 2 steps: Induction and Pruning. Induction is where we actually build the tree, i.e. set all of the hierarchical decision boundaries based on our data. 
Because of the nature of training decision trees, they can be prone to major overfitting."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features. A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. 
Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "A decision stump is a machine learning model consisting of a one-level decision tree. That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature."}, {"text": "An alternating decision tree (ADTree) is a machine learning method for classification. It generalizes decision trees and has connections to boosting."}, {"text": "How to verify that decision rules are consistent with each other is also a challenge when there are too many rules. Usually such a problem leads to a satisfiability (SAT) formulation. 
This is the well-known NP-complete Boolean satisfiability problem."}]}, {"question": "What does it mean to normalize a variable", "positive_ctxs": [{"text": "Normalization usually means to scale a variable to have values between 0 and 1, while standardization transforms data to have a mean of zero and a standard deviation of 1. This standardization is called a z-score, and data points can be standardized with the following formula: A z-score standardizes variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. 
What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors."}]}, {"question": "How do you find the optimal number of clusters", "positive_ctxs": [{"text": "The optimal number of clusters can be defined as follows: Compute a clustering algorithm (e.g., k-means clustering) for different values of k. For each k, calculate the total within-cluster sum of squares (wss). Plot the curve of wss according to the number of clusters k."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "One of the advantages of mean shift over k-means is that the number of clusters is not pre-specified, because mean shift is likely to find only a few clusters if only a small number exist. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Mean shift has soft variants."}, {"text": "In centroid-based clustering, clusters are represented by a central vector, which may not necessarily be a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster are minimized."}, {"text": "In the second stage, simple random sampling is usually used. 
It is used separately in every cluster and the numbers of elements selected from different clusters are not necessarily equal. The total number of clusters N, number of clusters selected n, and numbers of elements from selected clusters need to be pre-determined by the survey designer."}]}, {"question": "What is tokenization and how does it work", "positive_ctxs": [{"text": "Credit card tokenization substitutes sensitive customer data with a one-time alphanumeric ID that has no value or connection to the account's owner. This randomly generated token is used to access, pass, transmit and retrieve a customer's credit card information safely."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) \u2013 Many person game theory; What is Mathematical Game Theory? (#5) \u2013 Finale, summing up, and my own view"}, {"text": "Actually, conflict in itself is not necessarily a negative thing. When handled constructively it can help people to stand up for themselves and others, to evolve and learn how to work together to achieve a mutually satisfactory solution. But if conflict is handled poorly it can cause anger, hurt, divisiveness and more serious problems."}, {"text": "A short written exercise that is often used is the \"one-minute paper\". This is a good way to review materials and provide feedback. However, a \"one-minute paper\" does not take one minute, and it is suggested that students have at least 10 minutes to work on this exercise in order to summarize it concisely."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. 
If there were no recollection component, zROC would have a predicted slope of 1."}]}, {"question": "Is backpropagation gradient descent", "positive_ctxs": [{"text": "In our implementation of gradient descent, we have used a function compute_gradient(loss) that computes the gradient of a loss operation in our computational graph with respect to the output of every other node n (i.e. the direction of change for n along which the loss increases the most)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Typically, stochastic gradient descent (SGD) is used to train the network. The gradient is computed using backpropagation through structure (BPTS), a variant of backpropagation through time used for recurrent neural networks."}, {"text": "Further proposals include the momentum method, which appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning. 
Stochastic gradient descent with momentum remembers the update \u0394 w at each iteration, and determines the next update as a linear combination of the gradient and the previous update:"}]}, {"question": "What is meant by offline classes", "positive_ctxs": [{"text": "Distance Learning Off-line is a mode of delivery that does not require online participation. You do not have to come to campus. Course materials may be available through the internet, but they can also be mailed to you if you prefer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. 
A good model selection technique will balance goodness of fit with simplicity."}, {"text": "This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module. Researchers have concluded that the results of offline evaluations should be viewed critically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) \u2013 Many person game theory; What is Mathematical Game Theory? (#5) \u2013 Finale, summing up, and my own view"}, {"text": "Note that the final result of an insertion sort is optimum, i.e., a correctly sorted list. For many problems, online algorithms cannot match the performance of offline algorithms. If the ratio between the performance of an online algorithm and an optimal offline algorithm is bounded, the online algorithm is called competitive. Not every offline algorithm has an efficient online counterpart."}, {"text": "In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But the original use of the phrase \"complete Archimedean field\" was by David Hilbert, who meant still something else by it. 
He meant that the real numbers form the largest Archimedean field in the sense that every other Archimedean field is a subfield of"}]}, {"question": "How many training examples are required by one shot learning for each class", "positive_ctxs": [{"text": "one training example"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples."}, {"text": "Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). 
Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy."}]}, {"question": "What does normal distribution mean in statistics", "positive_ctxs": [{"text": "Normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In graph form, normal distribution will appear as a bell curve."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Measurement errors in physical experiments are often modeled by a normal distribution. 
This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "For the important case in which the data are hypothesized to be a random sample from a normal distribution, depending on the nature of the test statistic and the hypotheses of interest about its distribution, different null hypothesis tests have been developed. Some such tests are the z-test for hypotheses concerning the mean of a normal distribution with known variance, the t-test based on Student's t-distribution of a suitable statistic for hypotheses concerning the mean of a normal distribution when the variance is unknown, the F-test based on the F-distribution of yet another statistic for hypotheses concerning the variance. 
For data of other nature, for instance categorical (discrete) data, test statistics might be constructed whose null hypothesis distribution is based on normal approximations to appropriate statistics obtained by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test."}, {"text": "(In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics when constructed from a normal distribution with unknown mean and variance are constructed using a Student's t-distribution. This correctly estimates the variance, due to the fact that (1) the average of normally distributed random variables is also normally distributed; (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a student's t-distribution."}]}, {"question": "What is the difference between field experiment and quasi experiment", "positive_ctxs": [{"text": "In a true experiment, participants are randomly assigned to either the treatment or the control group, whereas they are not assigned randomly in a quasi-experiment. Thus, the researcher must try to statistically control for as many of these differences as possible."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Some authors distinguish between a natural experiment and a \"quasi-experiment\". The difference is that in a quasi-experiment the criterion for assignment is selected by the researcher, while in a natural experiment the assignment occurs 'naturally,' without the researcher's intervention."}, {"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "However, a better experiment is to compute the natural direct effect. 
(NDE). This is the effect determined by leaving the relationship between X and M untouched while intervening on the relationship between X and Y."}, {"text": "What the second experiment achieves with eight would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other."}, {"text": "Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures."}, {"text": "Suppose p is unknown and an experiment is conducted where it is decided ahead of time that sampling will continue until r successes are found. A sufficient statistic for the experiment is k, the number of failures."}, {"text": "\"Person-by-treatment\" designs are the most common type of quasi experiment design. In this design, the experimenter measures at least one independent variable. Along with measuring one variable, the experimenter will also manipulate a different independent variable."}]}, {"question": "Why do we use logit transformation", "positive_ctxs": [{"text": "The effect of the logit transformation is primarily to pull out the ends of the distribution. Over a broad range of intermediate values of the proportion (p), the relationship of logit(p) and p is nearly linear."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This logit transformation is the logarithm of the transformation that divides the variable X by its mirror-image (X/(1 - X)) resulting in the \"inverted beta distribution\" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) with support [0, +\u221e). 
As previously discussed in the section \"Moments of logarithmically transformed random variables,\" the logit transformation"}, {"text": "If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter (for skewed cases such that"}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "This relative popularity was due to the adoption of the logit outside of bioassay, rather than displacing the probit within bioassay, and its informal use in practice; the logit's popularity is credited to the logit model's computational simplicity, mathematical properties, and generality, allowing its use in varied fields.Various refinements occurred during that time, notably by David Cox, as in Cox (1958).The multinomial logit model was introduced independently in Cox (1966) and Thiel (1969), which greatly increased the scope of application and the popularity of the logit model. In 1973 Daniel McFadden linked the multinomial logit to the theory of discrete choice, specifically Luce's choice axiom, showing that the multinomial logit followed from the assumption of independence of irrelevant alternatives and interpreting odds of alternatives as relative preferences; this gave a theoretical foundation for the logistic regression."}]}, {"question": "How is Knn different from K means clustering", "positive_ctxs": [{"text": "KNN represents a supervised classification algorithm that will give new data points accordingly to the k number or the closest data points, while k-means clustering is an unsupervised clustering algorithm that gathers and groups data into k number of clusters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model."}, {"text": "Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. 
The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model."}, {"text": "Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}, {"text": "A simple agglomerative clustering algorithm is described in the single-linkage clustering page; it can easily be adapted to different types of linkage (see below)."}]}, {"question": "Is Random Forest a classification technique", "positive_ctxs": [{"text": "Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean/average prediction (regression) of the"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Random forests can be used to rank the importance of variables in a regression or classification problem in a natural way. 
The following technique was described in Breiman's original paper and is implemented in the R package randomForest. The first step in measuring the variable importance in a data set"}, {"text": "Lin and Jeon established the connection between random forests and adaptive nearest neighbor, implying that random forests can be seen as adaptive kernel estimates. Davies and Ghahramani proposed Random Forest Kernel and show that it can empirically outperform state-of-the-art kernel methods. Scornet first defined KeRF estimates and gave the explicit link between KeRF estimates and random forest."}, {"text": "3, which has a goat. He then says to you, \"Do you want to pick door No. 2?\" Is it to your advantage to switch your choice?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "COBWEB is an incremental clustering technique that keeps a hierarchical clustering model in the form of a classification tree. For each new point COBWEB descends the tree, updates the nodes along the way and looks for the best node to put the point on (using a category utility function)."}, {"text": "Error that occurs due to natural variation in the process. Random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error."}]}, {"question": "What does the Z in z score stand for", "positive_ctxs": [{"text": "A Z-score is a numerical measurement that describes a value's relationship to the mean of a group of values. Z-score is measured in terms of standard deviations from the mean. 
If a Z-score is 0, it indicates that the data point's score is identical to the mean score."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "Especially, Z is distributed uniformly on (-1,+1) and independent of the ratio Y/X, thus, P ( Z \u2264 0.5 | Y/X ) = 0.75. On the other hand, the inequality z \u2264 0.5 holds on an arc of the circle x\u00b2 + y\u00b2 + z\u00b2 = 1, y = cx (for any given c). The length of the arc is 2/3 of the length of the circle."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "What are the Hyperparameters of a neural network", "positive_ctxs": [{"text": "Hyperparameters are the variables which determines the network structure(Eg: Number of Hidden Units) and the variables which determine how the network is trained(Eg: Learning Rate). Hyperparameters are set before training(before optimizing the weights and bias)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "(2017) proposed elastic weight consolidation (EWC), a method to sequentially train a single artificial neural network on multiple tasks. This technique supposes that some weights of the trained neural network are more important for previously learned tasks than others. During training of the neural network on a new task, changes to the weights of the network are made less likely the greater their importance."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. 
Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "is a set of weights. The optimization problem of finding alpha is readily solved through neural networks, hence a \"meta-network\" where each \"neuron\" is in fact an entire neural network can be trained, and the synaptic weights of the final network is the weight applied to each expert. This is known as a linear combination of experts.It can be seen that most forms of neural networks are some subset of a linear combination: the standard neural net (where only one expert is used) is simply a linear combination with all"}, {"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with"}, {"text": "In semantic hashing documents are mapped to memory addresses by means of a neural network in such a way that semantically similar documents are located at nearby addresses. Deep neural network essentially builds a graphical model of the word-count vectors obtained from a large set of documents. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document."}]}, {"question": "Is 0 part of the natural numbers", "positive_ctxs": [{"text": "Natural numbers are a part of the number system which includes all the positive integers from 1 till infinity and are also used for counting purpose. It does not include zero (0). 
In fact, 1,2,3,4,5,6,7,8,9\u2026., are also called counting numbers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "for emphasizing that zero is excluded).Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, while in other writings, that term is used instead for the integers (including negative integers).The natural numbers are a basis from which many other number sets may be built by extension: the integers, by including (if not yet in) the neutral element 0 and an additive inverse (\u2212n) for each nonzero natural number n; the rational numbers, by including a multiplicative inverse (1/n ) for each nonzero integer n (and also the product of these inverses by integers); the real numbers by including with the rationals the limits of (converging) Cauchy sequences of rationals; the complex numbers, by including with the real numbers the unresolved square root of minus one (and also the sums and products thereof); and so on. These chains of extensions make the natural numbers canonically embedded (identified) in the other number systems."}, {"text": "The set of all real numbers is uncountable, in the sense that while both the set of all natural numbers and the set of all real numbers are infinite sets, there can be no one-to-one function from the real numbers to the natural numbers. In fact, the cardinality of the set of all real numbers, denoted by"}, {"text": "No explicit representation of natural numbers is given. However natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. A distribution of natural numbers is implied by this, based on the complexity of constructing each number."}, {"text": "Rational numbers are constructed by the division of natural numbers. The simplest representation has no common factors between the numerator and the denominator. 
This allows the probability distribution of natural numbers to be extended to rational numbers."}, {"text": "However, not all infinite sets have the same cardinality. For example, Georg Cantor (who introduced this concept) demonstrated that the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers), and therefore that the set of real numbers has a greater cardinality than the set of natural numbers."}, {"text": "An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers, this is denoted as \u03c9 (omega)."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}]}, {"question": "How do you show the relationship between two variables in R", "positive_ctxs": [{"text": "Summary: Use the function cor.test(x,y) to analyze the correlation coefficient between two variables and to get significance level of the correlation. Three possible correlation methods using the function cor.test(x,y): pearson, kendall, spearman."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. 
Covariance is a measure of linear relationship between the random variables."}, {"text": "Conventional analysis will yield the dimensionless variable \u03c0 = R g/v\u00b2, but offers no insight into the relationship between R and \u03b8."}, {"text": "This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not give a numerical value of a measure of the strength of the relationship between the two variables of interest."}, {"text": "In statistics, collinearity refers to a linear relationship between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between the two, so the correlation between them is equal to 1 or \u22121."}, {"text": "The theory of binding explores the syntactic relationship that exists between coreferential expressions in sentences and texts. When two expressions are coreferential, the one is usually a full form (the antecedent) and the other is an abbreviated form (a proform or anaphor). Linguists use indices to show coreference, as with the i index in the example Bill\u1d62 said he\u1d62 would come."}, {"text": ", is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function."}]}, {"question": "What is difference between quantile and percentile", "positive_ctxs": [{"text": "Quantiles are points in a distribution that relate to the rank order of values in that distribution. Centiles/percentiles are descriptions of quantiles relative to 100; so the 75th percentile (upper quartile) is 75% or three quarters of the way up an ascending list of sorted values of a sample."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the ordered list {1,2,3,4} which contains four data values. 
What is the 75th percentile of this list using the Microsoft Excel method?"}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using this variant method?"}, {"text": "Consider the ordered list {15, 20, 35, 40, 50}, which contains five data values. What is the 40th percentile of this list using the NIST method?"}, {"text": "The percentile (or percentile score) and the percentile rank are related terms. The percentile rank of a score is the percentage of scores in its distribution that are less than it, an exclusive definition, and one that can be expressed with a single, simple formula. In contrast, there is not one formula or algorithm for a percentile score but many."}, {"text": "The term percentile and the related term percentile rank are often used in the reporting of scores from norm-referenced tests, but, as just noted, they are not the same. For percentile rank, a score is given and a percentage is computed. If the percentile rank for a specified score is 90%, then 90% of the scores were lower."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions \u2013 i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. 
In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution."}]}, {"question": "What is bias in machine learning example", "positive_ctxs": [{"text": "Bias machine learning can even be applied when interpreting valid or invalid results from an approved data model. Nearly all of the common machine learning biased data types come from our own cognitive biases. Some examples include Anchoring bias, Availability bias, Confirmation bias, and Stability bias."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": ", and thus introducing some bias to reduce variance. Furthermore, it is not uncommon in machine learning to have cases where"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. 
Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}]}, {"question": "How do you find the distribution in statistics", "positive_ctxs": [{"text": "How to find the mean of the probability distribution: StepsStep 1: Convert all the percentages to decimal probabilities. For example: Step 2: Construct a probability distribution table. Step 3: Multiply the values in each column. Step 4: Add the results from step 3 together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "What is the difference between a t value and p value", "positive_ctxs": [{"text": "A t-value is the relative error difference in contrast to the null hypothesis. A p-value, is the statistical significance of a measurement in how correct a statistical evidence part, is."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The function A(t | \u03bd) is the integral of Student's probability density function, f(t) between \u2212t and t, for t \u2265 0. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function A(t | \u03bd) can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population."}, {"text": "The goal is to find the parameter values for the model that \"best\" fits the data. 
The fit of a model to a data point is measured by its residual, defined as the difference between the actual value of the dependent variable and the value predicted by the model:"}, {"text": "The goal is to find the parameter values for the model that \"best\" fits the data. The fit of a model to a data point is measured by its residual, defined as the difference between the actual value of the dependent variable and the value predicted by the model:"}, {"text": "Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a \"mistake\". Variability is an inherent part of the results of measurements and of the measurement process."}, {"text": "Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a \"mistake\". Variability is an inherent part of the results of measurements and of the measurement process."}, {"text": "Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a \"mistake\". Variability is an inherent part of the results of measurements and of the measurement process."}, {"text": "In other dimensions, the constant B changes, but the same constant appears both in the t flow and in the coupling flow. The reason is that the derivative with respect to t of the closed loop with a single vertex is a closed loop with two vertices. This means that the only difference between the scaling of the coupling and the t is the combinatorial factors from joining and splitting."}]}, {"question": "What is the difference between correlation coefficient and correlation", "positive_ctxs": [{"text": "Explanation: Correlation is the process of studying the cause and effect relationship that exists between two variables. 
Correlation coefficient is the measure of the correlation that exists between two variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written \u03c1XY\u00b7Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation are available in Guilford (1973, pp."}, {"text": "The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative."}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. 
The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "If we compute the Pearson correlation coefficient between variables X and Y, the result is approximately 0.970, while if we compute the partial correlation between X and Y, using the formula given above, we find a partial correlation of 0.919. The computations were done using R with the following code."}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "For example, Spearman's rank correlation coefficient is useful to measure the statistical dependence between the rankings of athletes in two tournaments. And the Kendall rank correlation coefficient is another approach."}]}, {"question": "What is a blob in image processing", "positive_ctxs": [{"text": "The method of analyzing an image that has undergone binarization processing is called \"blob analysis\". A blob refers to a lump. Blob analysis is image processing's most basic method for analyzing the shape features of an object, such as the presence, number, area, position, length, and direction of lumps."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. 
The most common method for blob detection is convolution."}, {"text": "Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and other image processing methods."}, {"text": "The blob descriptors obtained from these blob detectors with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain blob descriptors that are more robust to perspective transformations, a natural approach is to devise a blob detector that is invariant to affine transformations."}, {"text": "Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector."}, {"text": "Consider shrinking an image and then performing corner detection. The detector will respond to points which are sharp in the shrunk image, but may be smooth in the original image. It is at this point that the difference between a corner detector and a blob detector becomes somewhat vague."}, {"text": "for a d-dimensional image) and strong negative responses for bright blobs of similar size. A main problem when applying this operator at a single scale, however, is that the operator response is strongly dependent on the relationship between the size of the blob structures in the image domain and the size of the Gaussian kernel used for pre-smoothing. 
In order to automatically capture blobs of different (unknown) size in the image domain, a multi-scale approach is therefore necessary."}, {"text": "The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images."}]}, {"question": "What is the moment of a random variable", "positive_ctxs": [{"text": "The \u201cmoments\u201d of a random variable (or of its distribution) are expected values of powers or related functions of the random variable. The rth moment of X is E(Xr). In particular, the first moment is the mean, \u00b5X = E(X). The mean is a measure of the \u201ccenter\u201d or \u201clocation\u201d of a distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}, {"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}, {"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. 
The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}, {"text": "Let a random variable X have a probability density f(x;\u03b1). The partial derivative with respect to the (unknown, and to be estimated) parameter \u03b1 of the log likelihood function is called the score. The second moment of the score is called the Fisher information:"}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution. 
If the random variable is denoted by"}]}, {"question": "Is regression an algorithm", "positive_ctxs": [{"text": "Linear Regression is a machine learning algorithm based on supervised learning. It performs a regression task. Regression models a target prediction value based on independent variables. Linear regression performs the task to predict a dependent variable value (y) based on a given independent variable (x)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "With extracellular measurement techniques an electrode (or array of several electrodes) is located in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages: 1) Is easier to obtain experimentally; 2) Is robust and lasts for a longer time; 3) Can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}, {"text": "Linear regression plays an important role in the field of artificial intelligence such as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties."}]}, {"question": "What is expected value of probability distribution", "positive_ctxs": [{"text": "The expected value (EV) is an anticipated value for an investment at some point in the future. In statistics and probability analysis, the expected value is calculated by multiplying each of the possible outcomes by the likelihood each outcome will occur and then summing all of those values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The probability distribution of the number of fixed points in a uniformly distributed random permutation approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion\u2013exclusion principle to show that the probability that there are no fixed points approaches 1/e. 
When n is big enough, the probability distribution of fixed points is almost the Poisson distribution with expected value 1."}, {"text": "As a result, this formula can be expressed as simply \"the posterior predictive probability of seeing a category is proportional to the total observed count of that category\", or as \"the expected count of a category is the same as the total observed count of the category\", where \"observed count\" is taken to include the pseudo-observations of the prior.The reason for the equivalence between posterior predictive probability and the expected value of the posterior distribution of p is evident with re-examination of the above formula. As explained in the posterior predictive distribution article, the formula for the posterior predictive probability has the form of an expected value taken with respect to the posterior distribution:"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. 
In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. 
In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}]}, {"question": "Why is it useful to track loss while the model is being trained", "positive_ctxs": [{"text": "Loss is often used in the training process to find the \"best\" parameter values for your model (e.g. weights in neural network). Once you find the optimized parameters above, you use this metrics to evaluate how accurate your model's prediction is compared to the true data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another useful regularization techniques for gradient boosted trees is to penalize model complexity of the learned model. The model complexity can be defined as the proportional number of leaves in the learned trees. The joint optimization of loss and model complexity corresponds to a post-pruning algorithm to remove branches that fail to reduce the loss by a threshold."}, {"text": "Another useful regularization techniques for gradient boosted trees is to penalize model complexity of the learned model. The model complexity can be defined as the proportional number of leaves in the learned trees. The joint optimization of loss and model complexity corresponds to a post-pruning algorithm to remove branches that fail to reduce the loss by a threshold."}, {"text": "Another useful regularization techniques for gradient boosted trees is to penalize model complexity of the learned model. The model complexity can be defined as the proportional number of leaves in the learned trees. 
The joint optimization of loss and model complexity corresponds to a post-pruning algorithm to remove branches that fail to reduce the loss by a threshold."}, {"text": "indicates that the loss Hessian is resilient to the mini-batch variance, whereas the second term on the right hand side suggests that it becomes smoother when the Hessian and the inner product are non-negative. If the loss is locally convex, then the Hessian is positive semi-definite, while the inner product is positive if"}, {"text": "Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.A common example involves estimating \"location\". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function."}, {"text": "Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.A common example involves estimating \"location\". 
Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function."}, {"text": "However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute loss function surrogates which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem."}]}, {"question": "How do you find the mean of a probability distribution", "positive_ctxs": [{"text": "How to find the mean of the probability distribution: StepsStep 1: Convert all the percentages to decimal probabilities. For example: Step 2: Construct a probability distribution table. Step 3: Multiply the values in each column. Step 4: Add the results from step 3 together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). 
Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "The classroom mean score is 96, which is \u22122.47 standard error units from the population mean of 100. Looking up the z-score in a table of the standard normal distribution cumulative probability, we find that the probability of observing a standard normal value below \u22122.47 is approximately 0.5 \u2212 0.4932 = 0.0068. This is the one-sided p-value for the null hypothesis that the 55 students are comparable to a simple random sample from the population of all test-takers."}]}, {"question": "What is the skewness of a chi square distribution", "positive_ctxs": [{"text": "Chi Square distributions are positively skewed, with the degree of skew decreasing with increasing degrees of freedom. As the degrees of freedom increases, the Chi Square distribution approaches a normal distribution. Figure 1 shows density functions for three Chi Square distributions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is not consistent for the sample median. 
In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson \u2013 the difference between the kurtosis and the square of the skewness (vide infra)."}, {"text": "A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value."}]}, {"question": "What is LDA clustering", "positive_ctxs": [{"text": "LDA is a probabilistic generative model that extracts the thematic structure in a big document collection. 
The model assumes that every topic is a distribution of words in the vocabulary, and every document (described over the same vocabulary) is a distribution of a small subset of these topics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. 
Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades. Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "What is labeling in image processing", "positive_ctxs": [{"text": "Connected components labeling scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar pixel intensity values and are in some way connected with each other."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. 
In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "Connected-component labeling is used in computer vision to detect connected regions in binary digital images, although color images and data with higher dimensionality can also be processed. When integrated into an image recognition system or human-computer interaction interface, connected component labeling can operate on a variety of information. Blob extraction is generally performed on the resulting binary image from a thresholding step, but it can be applicable to gray-scale and color images as well."}, {"text": "define connected components labeling as the \u201c[c]reation of a labeled image in which the positions associated with the same connected component of the binary input image have a unique label.\u201d Shapiro et al. define CCL as an operator whose \u201cinput is a binary image and [...] output is a symbolic image in which the label assigned to each pixel is an integer uniquely identifying the connected component to which that pixel belongs.\u201dThere is no consensus on the definition of CCA in the academic literature. It is often used interchangeably with CCL."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "The emergence of FPGAs with enough capacity to perform complex image processing tasks also led to high-performance architectures for connected-component labeling. 
Most of these architectures utilize the single pass variant of this algorithm, because of the limited memory resources available on an FPGA. These types of connected component labeling architectures are able to process several image pixels in parallel, thereby enabling a high throughput at low processing latency to be achieved."}, {"text": "The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images."}, {"text": "The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing."}]}, {"question": "What is output value", "positive_ctxs": [{"text": "When we know an input value and want to determine the corresponding output value for a function, we evaluate the function. 
When we know an output value and want to determine the input values that would produce that output value, we set the output equal to the function's formula and solve for the input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The transient response is the maximum allowable output voltage variation for a load current step change. The transient response is a function of the output capacitor value ("}, {"text": "In some sense the 0-1 indicator function is the most natural loss function for classification. It takes the value 0 if the predicted output is the same as the actual output, and it takes the value 1 if the predicted output is different from the actual output."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is Q function in machine learning", "positive_ctxs": [{"text": "Q-learning is a model-free reinforcement learning algorithm to learn quality of actions telling an agent what action to take under what circumstances. 
\"Q\" names the function that the algorithm computes \u2013 the maximum expected rewards for an action taken in a given state."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. In particular, it is commonly used in support vector machine classification. The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as"}, {"text": "Because the future maximum approximated action value in Q-learning is evaluated using the same Q function as in current action selection policy, in noisy environments Q-learning can sometimes overestimate the action values, slowing the learning. A variant called Double Q-learning was proposed to correct this. Double Q-learning is an off-policy reinforcement learning algorithm, where a different policy is used for value evaluation than what is used to select the next action."}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. 
It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification."}, {"text": "Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification."}, {"text": "Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification."}]}, {"question": "Which algorithm is used for classification", "positive_ctxs": [{"text": "When most dependent variables are numeric, logistic regression and SVM should be the first try for classification. These models are easy to implement, their parameters easy to tune, and the performances are also pretty good. So these models are appropriate for beginners."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in the data set."}, {"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. 
In both cases, the input consists of the k closest training examples in the data set."}, {"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in the data set."}, {"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in the data set."}, {"text": "In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric classification method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in the data set."}, {"text": "will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand."}, {"text": "Which treatment is considered better is determined by an inequality between two ratios (successes/total). 
The reversal of the inequality between the ratios, which creates Simpson's paradox, happens because two effects occur together:"}]}, {"question": "What are the application of binomial distribution", "positive_ctxs": [{"text": "The binomial distribution model allows us to compute the probability of observing a specified number of \"successes\" when the process is repeated a specific number of times (e.g., in a set of patients) and the outcome for a given patient is either a success or a failure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success."}, {"text": "The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "The distribution of N thus is the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d)."}, {"text": "The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two colors of marbles, red and green. 
Define drawing a green marble as a success and drawing a red marble as a failure (analogous to the binomial distribution)."}, {"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}]}, {"question": "What is stratified and systematic sampling", "positive_ctxs": [{"text": "Systematic sampling is frequently used to select a specified number of records from a computer file. Stratified sampling is commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that share at least one common characteristic."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. 
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Oversampling: Choice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "Oversampling: Choice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling."}, {"text": "A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling."}, {"text": "A stratified survey could thus claim to be more representative of the population than a survey of simple random sampling or systematic sampling."}]}, {"question": "What is the difference between univariate and multivariate regression", "positive_ctxs": [{"text": "Univariate and multivariate represent two approaches to statistical analysis. Univariate involves the analysis of a single variable while multivariate analysis examines two or more variables. Most multivariate analysis involves a dependent variable and multiple independent variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. 
In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials."}, {"text": "Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied."}, {"text": "Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied."}, {"text": "Previously, this article discussed the univariate median, when the sample or population had one-dimension. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one."}, {"text": "The key reason for studentizing is that, in regression analysis of a multivariate distribution, the variances of the residuals at different input variable values may differ, even if the variances of the errors at these different input variable values are equal. The issue is the difference between errors and residuals in statistics, particularly the behavior of residuals in regressions."}, {"text": "Like univariate analysis, bivariate analysis can be descriptive or inferential. 
It is the analysis of the relationship between the two variables. Bivariate analysis is a simple (two variable) special case of multivariate analysis (where multiple relations between multiple variables are examined simultaneously)."}]}, {"question": "Where is the mean located in relationship to the median", "positive_ctxs": [{"text": "It appears that the median is always closest to the high point (the mode), while the mean tends to be farther out on the tail. In a symmetrical distribution, the mean and the median are both centrally located close to the high point of the distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "1, 2, 2, 2, 3, 14. The median is 2 in this case (as is the mode), and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all-but-one of the values. However, the widely cited empirical relationship that the mean is shifted \"further into the tail\" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be \"too far\" apart; see \u00a7 Inequality relating means and medians below. As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it."}, {"text": "The mean absolute deviation from the median is less than or equal to the mean absolute deviation from the mean. 
In fact, the mean absolute deviation from the median is always less than or equal to the mean absolute deviation from any other fixed number."}, {"text": "For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time a median can still be calculated. Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}, {"text": "the efficiency is higher than this (for example, a sample size of 3 gives an efficiency of about 74%). The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. 
For example, the median is far more robust to outliers, so that if the Gaussian model is questionable or approximate, there may be advantages to using the median (see Robust statistics)."}, {"text": "the efficiency is higher than this (for example, a sample size of 3 gives an efficiency of about 74%). The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. For example, the median is far more robust to outliers, so that if the Gaussian model is questionable or approximate, there may be advantages to using the median (see Robust statistics)."}]}, {"question": "What are the assumptions of discriminant analysis", "positive_ctxs": [{"text": "Assumptions. The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables. Multivariate normality: Independent variables are normal for each level of the grouping variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables."}, {"text": "The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables."}, {"text": "The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables."}, {"text": "The assumptions of discriminant analysis are the same as those for MANOVA. 
The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables."}, {"text": "The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables."}, {"text": "Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met."}, {"text": "Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met."}]}, {"question": "When would you use a Wilcoxon rank sum test", "positive_ctxs": [{"text": "The Mann Whitney U test, sometimes called the Mann Whitney Wilcoxon Test or the Wilcoxon Rank Sum Test, is used to test whether two samples are likely to derive from the same population (i.e., that the two populations have the same shape)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The logrank statistic can be used when observations are censored. If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "The logrank statistic can be used when observations are censored. 
If censored observations are not present in the data then the Wilcoxon rank sum test is appropriate."}, {"text": "In a single paper in 1945, Frank Wilcoxon proposed both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample size case in that paper (though in a later paper he gave larger tables)."}, {"text": "In a single paper in 1945, Frank Wilcoxon proposed both the one-sample signed rank and the two-sample rank sum test, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal). However, he only tabulated a few points for the equal-sample size case in that paper (though in a later paper he gave larger tables)."}, {"text": "Imagine you have a cluster of news articles on a particular event, and you want to produce one summary. Each article is likely to have many similar sentences, and you would only want to include distinct ideas in the summary. To address this issue, LexRank applies a heuristic post-processing step that builds up a summary by adding sentences in rank order, but discards any sentences that are too similar to ones already placed in the summary."}, {"text": "The Mann\u2013Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann\u2013Whitney U test is applied to independent samples. The Wilcoxon signed-rank test is applied to matched or dependent samples."}, {"text": "The Mann\u2013Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann\u2013Whitney U test is applied to independent samples. 
The Wilcoxon signed-rank test is applied to matched or dependent samples."}]}, {"question": "What is meant by linear regression", "positive_ctxs": [{"text": "Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What do you mean by descriptive statistics", "positive_ctxs": [{"text": "Descriptive statistics are used to describe the basic features of the data in a study. They provide simple summaries about the sample and the measures. Descriptive statistics are typically distinguished from inferential statistics. With descriptive statistics you are simply describing what is or what the data shows."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Some informative descriptive statistics, such as the sample range, do not make good test statistics since it is difficult to determine their sampling distribution."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. 
Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "When a sample consists of more than one variable, descriptive statistics may be used to describe the relationship between pairs of variables. In this case, descriptive statistics include:"}, {"text": "A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. 
Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent."}]}, {"question": "How do you interpret odds ratio", "positive_ctxs": [{"text": "Odds Ratio is a measure of the strength of association between an exposure and an outcome. OR > 1 means greater odds of association with the exposure and outcome. OR = 1 means there is no association between exposure and outcome. OR < 1 means there are lower odds of association between the exposure and outcome."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. 
Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "In this case, the odds ratio equals one, and conversely the odds ratio can only equal one if the joint probabilities can be factored in this way. Thus the odds ratio equals one if and only if X and Y are independent."}, {"text": "An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the OR equals 1, i.e., the odds of one event are the same in either the presence or absence of the other event."}, {"text": "One approach to inference uses large sample approximations to the sampling distribution of the log odds ratio (the natural logarithm of the odds ratio). 
If we use the joint probability notation defined above, the population log odds ratio is"}]}, {"question": "How do you use a named entity recognition", "positive_ctxs": [{"text": "Named Entity Recognition can automatically scan entire articles and reveal which are the major people, organizations, and places discussed in them. Knowing the relevant tags for each article helps in automatically categorizing the articles in defined hierarchies and enables smooth content discovery."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The F-score is also used in machine learning. However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier. The F-score has been widely used in the natural language processing literature, such as in the evaluation of named entity recognition and word segmentation."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Named entity recognition (NER) \u2013 given a stream of text, determines which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. Although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case is often inaccurate or insufficient. For example, the first word of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized."}]}, {"question": "What is an agent artificial intelligence", "positive_ctxs": [{"text": "In artificial intelligence, an intelligent agent (IA) refers to an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. 
It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, \"the action selection problem\" is typically associated with intelligent agents and animats\u2014artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior."}, {"text": "Artificial intelligence (or AI) is both the intelligence that is demonstrated by machines and the branch of computer science which aims to create it, through \"the study and design of intelligent agents\" or \"rational agents\", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. Kaplan and Haenlein define artificial intelligence as \u201ca system\u2019s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation\u201d. Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars."}, {"text": "The learning system here is similar to any other neural styled networks, which is through modifying the connection strength between the demons; in other words, how the demons respond to each other's yelling. 
This multiple agent approach to human information processing became the assumption for many modern artificial intelligence systems."}, {"text": "Are there limits to how intelligent machines\u2014or human-machine hybrids\u2014can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent."}, {"text": "Are there limits to how intelligent machines\u2014or human-machine hybrids\u2014can be? A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent."}]}, {"question": "How do you make a predictive model in R", "positive_ctxs": [{"text": "Clean, augment, and preprocess the data into a convenient form, if needed. Conduct an exploratory analysis of the data to get a better sense of it. Using what you find as a guide, construct a model of some aspect of the data. Use the model to answer the question you started with, and validate your results."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}]}, {"question": "How do you calculate latent variables", "positive_ctxs": [{"text": "On a technical note, estimation of a latent variable is done by analyzing the variance and covariance of the indicators. The measurement model of a latent variable with effect indicators is the set of relationships (modeled as equations) in which the latent variable is set as the predictor of the indicators."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. 
latent variables not included in"}]}, {"question": "Whats the difference between an F Test and T Test", "positive_ctxs": [{"text": "T - test is used to if the means of two populations are equal (assuming similar variance) whereas F-test is used to test if the variances of two populations are equal. F - test can also be extended to check whether the means of three or more groups are different or not (ANOVA F-test)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Analyze and interpret the data. Test hypotheses advanced in step 1 and draw conclusions about the content represented in the dataset."}, {"text": "Testing and Test Control Notation (TTCN), both TTCN-2 and TTCN-3, follows actor model rather closely. In TTCN actor is a test component: either parallel test component (PTC) or main test component (MTC). Test components can send and receive messages to and from remote partners (peer test components or test system interface), the latter being identified by its address."}, {"text": "Test input data should infect the program state by causing different program states for the mutant and the original program. For example, a test with a = 1 and b = 0 would do this."}, {"text": "As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented."}, {"text": "orthogonal loading matrices; and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. 
The decompositions of X and Y are made so as to maximise the covariance between T and U."}, {"text": "orthogonal loading matrices; and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables. The decompositions of X and Y are made so as to maximise the covariance between T and U."}, {"text": "This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants."}]}, {"question": "What is significance level in backward elimination", "positive_ctxs": [{"text": "The first step in backward elimination is pretty simple, you just select a significance level, or select the P-value. Usually, in most cases, a 5% significance level is selected. This means the P-value will be 0.05. You can change this value depending on the project."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. 
The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}]}, {"question": "What is the difference between SVM and neural networks", "positive_ctxs": [{"text": "An SVM possesses a number of parameters that increase linearly with the linear increase in the size of the input. A NN, on the other hand, doesn't. Even though here we focused especially on single-layer networks, a neural network can have as many layers as we want."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. 
The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. 
Also, data structures and algorithms for general fast search."}]}, {"question": "What does variance mean in at test", "positive_ctxs": [{"text": "Variance (\u03c32) in statistics is a measurement of the spread between numbers in a data set. That is, it measures how far each number in the set is from the mean and therefore from every other number in the set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:It is sometimes described as a measure of downside risk in an investments context. For skewed distributions, the semivariance can provide additional information that a variance does not.For inequalities associated with the semivariance, see Chebyshev's inequality \u00a7 Semivariances."}, {"text": "The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:It is sometimes described as a measure of downside risk in an investments context. 
For skewed distributions, the semivariance can provide additional information that a variance does not.For inequalities associated with the semivariance, see Chebyshev's inequality \u00a7 Semivariances."}, {"text": "The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:It is sometimes described as a measure of downside risk in an investments context. For skewed distributions, the semivariance can provide additional information that a variance does not.For inequalities associated with the semivariance, see Chebyshev's inequality \u00a7 Semivariances."}, {"text": "That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem."}]}, {"question": "Is the sample mean a consistent estimator", "positive_ctxs": [{"text": "The sample mean is a consistent estimator for the population mean. A consistent estimate has insignificant errors (variations) as sample sizes grow larger. More specifically, the probability that those errors will vary by more than a given amount approaches zero as the sample size increases."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The data set contains two outliers, which greatly influence the sample mean. (The sample mean need not be a consistent estimator for any population mean, because no mean need exist for a heavy-tailed distribution.) A well-defined and robust statistic for central tendency is the sample median, which is consistent and median-unbiased for the population median."}, {"text": "For each random variable, the sample mean is a good estimator of the population mean, where a \"good\" estimator is defined as being efficient and unbiased. 
Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution."}, {"text": "Efficiency of an estimator may change significantly if the distribution changes, often dropping. This is one of the motivations of robust statistics \u2013 an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98% N(\u03bc, \u03c3) and 2% N(\u03bc, 10\u03c3), the presence of extreme values from the latter distribution (often \"contaminating outliers\") significantly reduces the efficiency of the sample mean as an estimator of \u03bc."}, {"text": "Efficiency of an estimator may change significantly if the distribution changes, often dropping. This is one of the motivations of robust statistics \u2013 an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98% N(\u03bc, \u03c3) and 2% N(\u03bc, 10\u03c3), the presence of extreme values from the latter distribution (often \"contaminating outliers\") significantly reduces the efficiency of the sample mean as an estimator of \u03bc."}, {"text": "Efficiency of an estimator may change significantly if the distribution changes, often dropping. 
This is one of the motivations of robust statistics \u2013 an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98% N(\u03bc, \u03c3) and 2% N(\u03bc, 10\u03c3), the presence of extreme values from the latter distribution (often \"contaminating outliers\") significantly reduces the efficiency of the sample mean as an estimator of \u03bc."}, {"text": "This estimator has mean \u03b8 and variance of \u03c32 / n, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution."}, {"text": "This estimator has mean \u03b8 and variance of \u03c32 / n, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution."}]}, {"question": "What is difference between standard deviation and standard error", "positive_ctxs": [{"text": "The standard deviation (SD) measures the amount of variability, or dispersion, from the individual data values to the mean, while the standard error of the mean (SEM) measures how far the sample mean of the data is likely to be from the true population mean. The SEM is always smaller than the SD."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. 
In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. 
The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}, {"text": "-th feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. The score is normalized by the standard deviation of these differences."}, {"text": "In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurement from different laboratories is called reproducibility."}]}, {"question": "What is a linear growth curve", "positive_ctxs": [{"text": "Linear Growth Model Organisms generally grow in spurts that are dependent on both environment and genetics. Under controlled laboratory conditions, however, one can often observe a constant rate of growth. These periods of constant growth are often referred to as the linear portions of the growth curve."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The logistic growth curve depicts how population growth rate and the carrying capacity are inter-connected. As illustrated in the logistic growth curve model, when the population size is small, the population increases exponentially. 
However, as population size nears the carrying capacity, the growth decreases and reaches zero at K.What determines a specific system's carrying capacity involves a limiting factor which may be something such as available supplies of food, water, nesting areas, space or amount of waste that can be absorbed."}, {"text": "His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus \"logistic growth\" is presumably named by analogy, logistic being from Ancient Greek: \u03bb\u03bf\u03b3\u1fd0\u03c3\u03c4\u1fd0\u03ba\u03cc\u03c2, romanized: logistik\u00f3s, a traditional division of Greek mathematics. The term is unrelated to the military and management term logistics, which is instead from French: logis \"lodgings\", though some believe the Greek term also influenced logistics; see Logistics \u00a7 Origin for details."}, {"text": "His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus \"logistic growth\" is presumably named by analogy, logistic being from Ancient Greek: \u03bb\u03bf\u03b3\u1fd0\u03c3\u03c4\u1fd0\u03ba\u03cc\u03c2, romanized: logistik\u00f3s, a traditional division of Greek mathematics. The term is unrelated to the military and management term logistics, which is instead from French: logis \"lodgings\", though some believe the Greek term also influenced logistics; see Logistics \u00a7 Origin for details."}, {"text": "The sRGB color space standard used with most cameras, PCs, and printers does not use a simple power-law nonlinearity as above, but has a decoding gamma value near 2.2 over much of its range, as shown in the plot to the right. Below a compressed value of 0.04045 or a linear intensity of 0.00313, the curve is linear (encoded value proportional to intensity), so \u03b3 = 1. 
The dashed black curve behind the red curve is a standard \u03b3 = 2.2 power-law curve, for comparison."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}]}, {"question": "What is forget gate in Lstm", "positive_ctxs": [{"text": "The input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The minimal gated unit is similar to the fully gated unit, except the update and reset gate vector is merged into a forget gate. This also implies that the equation for the output vector must be changed:"}, {"text": "A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. 
The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "1995-1997: LSTM was proposed by Sepp Hochreiter and J\u00fcrgen Schmidhuber. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of LSTM block included cells, input and output gates.1999: Felix Gers and his advisor J\u00fcrgen Schmidhuber and Fred Cummins introduced the forget gate (also called \u201ckeep gate\u201d) into LSTM architecture,"}]}, {"question": "Is Z score the test statistic", "positive_ctxs": [{"text": "The Z score is a test of statistical significance that helps you decide whether or not to reject the null hypothesis. The p-value is the probability that you have falsely rejected the null hypothesis. Z scores are measures of standard deviation. Both statistics are associated with the standard normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variable and perform multivariate analysis. 
When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "The logrank statistic can be derived as the score test for the Cox proportional hazards model comparing two groups. It is therefore asymptotically equivalent to the likelihood ratio test statistic based on that model."}, {"text": "The logrank statistic can be derived as the score test for the Cox proportional hazards model comparing two groups. It is therefore asymptotically equivalent to the likelihood ratio test statistic based on that model."}, {"text": "Since the score is a function of the observations that are subject to sampling error, it lends itself to a test statistic known as score test in which the parameter is held at a particular value. Further, the ratio of two likelihood functions evaluated at two distinct parameter values can be understood as a definite integral of the score function."}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}]}, {"question": "What does it mean if a test is sensitive but not specific", "positive_ctxs": [{"text": "Sensitivity refers to a test's ability to designate an individual with disease as positive. A highly sensitive test means that there are few false negative results, and thus fewer cases of disease are missed.
The specificity of a test is its ability to designate an individual who does not have a disease as negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "follows the standard normal distribution N(0,1), then the rejection of this null hypothesis could mean that (i) the mean is not 0, or (ii) the variance is not 1, or (iii) the distribution is not normal. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. Anyway, if we do manage to reject the null hypothesis, even if we know the distribution is normal and variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible."}, {"text": "Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Measures may have high validity, but when the test does not appear to be measuring what it is, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid."}, {"text": "Suppose B is larger than A, but it is not discernible without an extremely sensitive scale. Further suppose C is larger than B, but this also is not discernible without an extremely sensitive scale. However, the difference in sizes between apples A and C is large enough that you can discern that C is larger than A without a sensitive scale."}, {"text": "While reliability does not imply validity, reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. 
While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid. For example, if a set of weighing scales consistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid (as the returned weight is not the true weight)."}, {"text": "When the mean value is close to zero, the coefficient of variation will approach infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale."}, {"text": "Sometimes a failure is planned and expected but does not occur: operator error, equipment malfunction, test anomaly, etc. The test result was not the desired time-to-failure but can be (and should be) used as a time-to-termination. The use of censored data is unintentional but necessary."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "Is a matrix an operator", "positive_ctxs": [{"text": "A matrix is a linear operator acting on the vector space of column vectors. Per linear algebra and its isomorphism theorems, any vector space is isomorphic to any other vector space of the same dimension. As such, matrices can be seen as representations of linear operators subject to some basis of column vectors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector.
The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations."}, {"text": "Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations."}, {"text": "The sum of the k largest singular values of M is a matrix norm, the Ky Fan k-norm of M. The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of M as a linear operator with respect to the Euclidean norms of Km and Kn. In other words, the Ky Fan 1-norm is the operator norm induced by the standard \u21132 Euclidean inner product. For this reason, it is also called the operator 2-norm."}, {"text": "The sum of the k largest singular values of M is a matrix norm, the Ky Fan k-norm of M. The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of M as a linear operator with respect to the Euclidean norms of Km and Kn. In other words, the Ky Fan 1-norm is the operator norm induced by the standard \u21132 Euclidean inner product.
For this reason, it is also called the operator 2-norm."}, {"text": "Diffusion maps leverages the relationship between heat diffusion and a random walk (Markov Chain); an analogy is drawn between the diffusion operator on a manifold and a Markov transition matrix operating on functions defined on the graph whose nodes were sampled from the manifold. In particular, let a data set be represented by"}, {"text": "In Python with the NumPy numerical library or the SymPy symbolic library, multiplication of array objects as a1*a2 produces the Hadamard product, but otherwise multiplication as a1@a2 or matrix objects m1*m2 will produce a matrix product. The Eigen C++ library provides a cwiseProduct member function for the Matrix class (a.cwiseProduct(b)), while the Armadillo library uses the operator % to make compact expressions (a % b; a * b is a matrix product)."}, {"text": ".This processing step for suppressing responses at edges is a transfer of a corresponding approach in the Harris operator for corner detection. The difference is that the measure for thresholding is computed from the Hessian matrix instead of a second-moment matrix."}]}, {"question": "How does standard deviation change with sample size", "positive_ctxs": [{"text": "The population mean of the distribution of sample means is the same as the population mean of the distribution being sampled from. 
Thus as the sample size increases, the standard deviation of the means decreases; and as the sample size decreases, the standard deviation of the sample means increases."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger). This estimator, denoted by sN, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:"}, {"text": "The formula for the population standard deviation (of a finite population) can be applied to the sample, using the size of the sample as the size of the population (though the actual population size from which the sample is drawn may be much larger).
This estimator, denoted by sN, is known as the uncorrected sample standard deviation, or sometimes the standard deviation of the sample (considered as the entire population), and is defined as follows:"}, {"text": "When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%. Gurland and Tripathi (1971) provide a correction and equation for this effect."}, {"text": "When the sample size is small, using the standard deviation of the sample instead of the true standard deviation of the population will tend to systematically underestimate the population standard deviation, and therefore also the standard error. With n = 2, the underestimate is about 25%, but for n = 6, the underestimate is only 5%. Gurland and Tripathi (1971) provide a correction and equation for this effect."}]}, {"question": "When should I use high pass filter", "positive_ctxs": [{"text": "If you are broadcasting or reinforcing sound outside, and even your best windscreen can't keep out the persistent low-frequency rumble from wind noise, then stopping it right at the source may be your best option. Highpass filters are excellent for this application."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters."}, {"text": "An alternative to the RTS algorithm is the modified Bryson\u2013Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. 
The equations for the backward pass involve the recursive"}, {"text": "The Rauch\u2013Tung\u2013Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing. The forward pass is the same as the regular Kalman filter algorithm. These filtered a-priori and a-posteriori state estimates"}, {"text": "Jeff Lundrigan of Next Generation wrote, \"Despite the problems \u2013 which it shares with practically every other Japanese RPG \u2013 Skies of Arcadia is an impressive, thoroughly delightful game that no one should pass up. \"Despite the generally positive reviews, many critics did criticize that it was sometimes difficult to explore the game's world due to the game's high rate of random encounter-based battles frequently disrupting progress."}, {"text": "For example, if an image contains a low amount of noise but with relatively high magnitude, then a median filter may be more appropriate."}, {"text": "Each track has a surrounding capture volume, approximately the shape of a football. The radius of the capture volume is approximately the distance the fastest detectable vehicle can travel between successive scans of that volume, which is determined by the receiver band pass filter in pulse-Doppler radar."}, {"text": "When performing multiple sample contrasts or tests, the Type I error rate tends to become inflated, raising concerns about multiple comparisons."}]}, {"question": "What is the curse of dimensionality in machine learning", "positive_ctxs": [{"text": "Machine Learning This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation.
In machine learning and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, which is also known as Hughes phenomenon. This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily. Nevertheless, in the context of a simple classifier (linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix) Zollanvari et al."}, {"text": "The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse."}, {"text": "Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot.
There is a large IBC literature on results in various settings; see Where to Learn More below."}, {"text": "Problems in machine learning often suffer from the curse of dimensionality \u2014 each sample may consist of a huge number of potential features (for instance, there can be 162,336 Haar features, as used by the Viola\u2013Jones object detection framework, in a 24\u00d724 pixel image window), and evaluating every feature can reduce not only the speed of classifier training and execution, but in fact reduce predictive power. Unlike neural networks and SVMs, the AdaBoost training process selects only those features known to improve the predictive power of the model, reducing dimensionality and potentially improving execution time as irrelevant features don't need to be computed."}, {"text": "noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, having data that is separated in each attribute becomes easier even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular for unsupervised data analysis this effect is known as swamping."}, {"text": "\"The blessing of dimensionality and the curse of dimensionality are two sides of the same coin.\" For example, the typical property of essentially high-dimensional probability distributions in a high-dimensional space is: the squared distance of random points to a selected point is, with high probability, close to the average (or median) squared distance. This property significantly simplifies the expected geometry of data and indexing of high-dimensional data (blessing), but, at the same time, it makes the similarity search in high dimensions difficult and even useless (curse). Zimek et al."}, {"text": "Geometric anomalies in high dimension lead to the well-known curse of dimensionality.
Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by linear inequality, with high probability, even for exponentially large samples."}]}, {"question": "What is fractional scaling ubuntu", "positive_ctxs": [{"text": "Fractional scaling helps you to fully utilize your HiDPI monitors, high-resolution laptops by making your desktop not too small or not too big and keep things in balance. Although the resolution settings are there to help they sometimes are not feasible due to the operating system limitations."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is,"}, {"text": "In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is,"}, {"text": "In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. 
For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is,"}, {"text": "In PCA, the contribution of each component is ranked based on the magnitude of its corresponding eigenvalue, which is equivalent to the fractional residual variance (FRV) in analyzing empirical data. For NMF, its components are ranked based only on the empirical FRV curves. The residual fractional eigenvalue plots, that is,"}, {"text": ", but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "When small samples are used to estimate a population mean in cases where the population standard deviation is unknown", "positive_ctxs": [{"text": "When small samples are used to estimate a population mean, in cases where the population standard deviation is unknown: the t-distribution must be used to obtain the critical value. the resulting margin of error for a confidence interval estimate will tend to be fairly small."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean.
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "Calculating z using this formula requires the population mean and the population standard deviation, not the sample mean or sample deviation. But knowing the true mean and standard deviation of a population is often unrealistic except in cases such as standardized testing, where the entire population is measured."}, {"text": "Calculating z using this formula requires the population mean and the population standard deviation, not the sample mean or sample deviation. But knowing the true mean and standard deviation of a population is often unrealistic except in cases such as standardized testing, where the entire population is measured."}, {"text": "Calculating z using this formula requires the population mean and the population standard deviation, not the sample mean or sample deviation. But knowing the true mean and standard deviation of a population is often unrealistic except in cases such as standardized testing, where the entire population is measured."}, {"text": "Calculating z using this formula requires the population mean and the population standard deviation, not the sample mean or sample deviation. But knowing the true mean and standard deviation of a population is often unrealistic except in cases such as standardized testing, where the entire population is measured."}]}, {"question": "What is Bayes theorem statistics", "positive_ctxs": [{"text": "In probability theory and statistics, Bayes's theorem (alternatively Bayes's law or Bayes's rule), named after Reverend Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event. 
Bayesian inference is fundamental to Bayesian statistics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Bayes linear statistics is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. 
Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}]}, {"question": "How do you find the mean and median of a histogram", "positive_ctxs": [{"text": "Here are some tips for connecting the shape of a histogram with the mean and median: If the histogram is skewed right, the mean is greater than the median. If the histogram is close to symmetric, then the mean and median are close to each other. If the histogram is skewed left, the mean is less than the median."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Furthermore, some types of signals (very often the case for images) use whole number representations: in these cases, histogram medians can be far more efficient because it is simple to update the histogram from window to window, and finding the median of a histogram is not particularly onerous."}, {"text": "Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations. More generally, a median is defined as a minimum of"}, {"text": "Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X.
In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations. More generally, a median is defined as a minimum of"}, {"text": "The distributions of both the sample mean and the sample median were determined by Laplace. The distribution of the sample median from a population with a density function"}, {"text": "However, the notion of median does not lend itself to the theory of higher moments as well as the arithmetic mean does, and is much harder to compute by computer. As a result, the median was steadily supplanted as a notion of generic average by the arithmetic mean during the 20th century."}, {"text": "For ordinal variables the median can be calculated as a measure of central tendency and the range (and variations of it) as a measure of dispersion. For interval level variables, the arithmetic mean (average) and standard deviation are added to the toolbox and, for ratio level variables, we add the geometric mean and harmonic mean as measures of central tendency and the coefficient of variation as a measure of dispersion."}, {"text": "If a distribution is symmetric, then the median is the mean (so long as the latter exists). But, in general, the median and the mean can differ. For instance, with a random variable that has an exponential distribution, any particular sample of this random variable will have roughly a 63% chance of being less than the mean."}]}, {"question": "Can you take the derivative of a matrix", "positive_ctxs": [{"text": "2 Answers. If M is your matrix, then it represents a linear f:Rn\u2192Rn, thus when you do M(T) by row times column multiplication you obtain a vectorial expression for your f(T). Thus \u2202M\u2202T is just the derivative of the vector MT, which you do component-wise."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations.
The Fr\u00e9chet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fr\u00e9chet differentiable, the two derivatives will agree up to translation of notations."}, {"text": "There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors."}, {"text": "More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an n-vector of dependent variables, or functions, of m independent variables we might consider the derivative of the dependent vector with respect to the independent vector."}, {"text": "While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. 
The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test')."}, {"text": "If, as more frequently, the population is not evenly divisible (suppose you want to sample 8 houses out of 125, where 125/8=15.625), should you take every 15th house or every 16th house? If you take every 16th house, 8*16=128, so there is a risk that the last house chosen does not exist. On the other hand, if you take every 15th house, 8*15=120, so the last five houses will never be selected."}, {"text": "If, as more frequently, the population is not evenly divisible (suppose you want to sample 8 houses out of 125, where 125/8=15.625), should you take every 15th house or every 16th house? If you take every 16th house, 8*16=128, so there is a risk that the last house chosen does not exist. On the other hand, if you take every 15th house, 8*15=120, so the last five houses will never be selected."}, {"text": "A Wiener process is the scaling limit of random walk in dimension 1. This means that if you take a random walk with very small steps, you get an approximation to a Wiener process (and, less accurately, to Brownian motion). To be more precise, if the step size is \u03b5, one needs to take a walk of length L/\u03b52 to approximate a Wiener length of L. As the step size tends to 0 (and the number of steps increases proportionally), random walk converges to a Wiener process in an appropriate sense."}]}, {"question": "How do you find the slope of the regression line in R", "positive_ctxs": [{"text": "The Formula for the Slope For paired data (x,y) we denote the standard deviation of the x data by sx and the standard deviation of the y data by sy. 
The formula for the slope a of the regression line is: a = r(sy/sx)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The comments should encourage the student to think about the effects of his or her actions on others\u2014-a strategy that in effect encourages the student to consider the ethical implications of the actions (Gibbs, 2003). Instead of simply saying, \"When you cut in line ahead of the other kids, that was not fair to them\", the teacher can try asking, \"How do you think the other kids feel when you cut in line ahead of them?\""}, {"text": "variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set."}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Shapiro-Wilk test: This is based on the fact that the line in the Q-Q plot has the slope of \u03c3. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.Tests based on the empirical distribution function:"}, {"text": "Shapiro-Wilk test: This is based on the fact that the line in the Q-Q plot has the slope of \u03c3. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.Tests based on the empirical distribution function:"}]}, {"question": "What is marginal effects in probit model", "positive_ctxs": [{"text": "Marginal probability effects are the partial effects of each explanatory variable on. the probability that the observed dependent variable Yi = 1, where in probit. models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function."}, {"text": "The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. 
(In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.)"}, {"text": "The reason for the use of the probit model is that a constant scaling of the input variable to a normal CDF (which can be absorbed through equivalent scaling of all of the parameters) yields a function that is practically identical to the logit function, but probit models are more tractable in some situations than logit models. (In a Bayesian setting in which normally distributed prior distributions are placed on the parameters, the relationship between the normal priors and the normal CDF link function means that a probit model can be computed using Gibbs sampling, while a logit model generally cannot.)"}, {"text": "As shown in the graph on the right, the logit and probit functions are extremely similar when the probit function is scaled, so that its slope at y = 0 matches the slope of the logit. As a result, probit models are sometimes used in place of logit models because for certain applications (e.g., in Bayesian statistics) the implementation is easier."}, {"text": "As shown in the graph on the right, the logit and probit functions are extremely similar when the probit function is scaled, so that its slope at y = 0 matches the slope of the logit. As a result, probit models are sometimes used in place of logit models because for certain applications (e.g., in Bayesian statistics) the implementation is easier."}, {"text": "As shown in the graph on the right, the logit and probit functions are extremely similar when the probit function is scaled, so that its slope at y = 0 matches the slope of the logit. 
As a result, probit models are sometimes used in place of logit models because for certain applications (e.g., in Bayesian statistics) the implementation is easier."}]}, {"question": "What is FFT and its applications in DAA", "positive_ctxs": [{"text": "A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "TPU has many applications including automotive instrument panels, caster wheels, power tools, sporting goods, medical devices, drive belts, footwear, inflatable rafts, and a variety of extruded film, sheet and profile applications. TPU is also a popular material found in outer cases of mobile electronic devices, such as mobile phones. It is also used to make keyboard protectors for laptops.TPU is well known for its applications in wire and cable jacketing, hose and tube, in adhesive and textile coating applications, as an impact modifier of other polymers."}, {"text": "TPU has many applications including automotive instrument panels, caster wheels, power tools, sporting goods, medical devices, drive belts, footwear, inflatable rafts, and a variety of extruded film, sheet and profile applications. 
TPU is also a popular material found in outer cases of mobile electronic devices, such as mobile phones. It is also used to make keyboard protectors for laptops. TPU is well known for its applications in wire and cable jacketing, hose and tube, in adhesive and textile coating applications, as an impact modifier of other polymers."}, {"text": "Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}, {"text": "Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology."}, {"text": "Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology."}]}, {"question": "What is variance and deviation", "positive_ctxs": [{"text": "The variance (symbolized by S2) and standard deviation (the square root of the variance, symbolized by S) are the most commonly used measures of spread. We know that variance is a measure of how spread out a data set is. It is calculated as the average squared deviation of each number from the mean of a data set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution.
The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "The standard deviation and the expected absolute deviation can both be used as an indicator of the \"spread\" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution."}, {"text": "These are the critical values of the normal distribution with right tail probability. However, t-values are used when the sample size is below 30 and the standard deviation is unknown.When the variance is unknown, we must use a different estimator:"}, {"text": "These are the critical values of the normal distribution with right tail probability. 
However, t-values are used when the sample size is below 30 and the standard deviation is unknown. When the variance is unknown, we must use a different estimator:"}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}]}, {"question": "What is classification learning", "positive_ctxs": [{"text": "In the terminology of machine learning, classification is considered an instance of supervised learning, i.e., learning where a training set of correctly identified observations is available. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.)
What happens if negative numbers are entered?"}, {"text": "Another example of parameter adjustment is hierarchical classification (sometimes referred to as instance space decomposition), which splits a complete multi-class problem into a set of smaller classification problems. It serves for learning more accurate concepts due to simpler classification boundaries in subtasks and individual feature selection procedures for subtasks. When doing classification decomposition, the central choice is the order of combination of smaller classification steps, called the classification path."}, {"text": "Another example of parameter adjustment is hierarchical classification (sometimes referred to as instance space decomposition), which splits a complete multi-class problem into a set of smaller classification problems. It serves for learning more accurate concepts due to simpler classification boundaries in subtasks and individual feature selection procedures for subtasks. When doing classification decomposition, the central choice is the order of combination of smaller classification steps, called the classification path."}, {"text": "Another example of parameter adjustment is hierarchical classification (sometimes referred to as instance space decomposition), which splits a complete multi-class problem into a set of smaller classification problems. It serves for learning more accurate concepts due to simpler classification boundaries in subtasks and individual feature selection procedures for subtasks. When doing classification decomposition, the central choice is the order of combination of smaller classification steps, called the classification path."}]}, {"question": "What is meant by training set and test set", "positive_ctxs": [{"text": "training set\u2014a subset to train a model. 
test set\u2014a subset to test the trained model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set."}, {"text": "This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set."}, {"text": "This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set."}, {"text": "One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets."}, {"text": "One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets."}, {"text": "One by one, a set is selected as test set. 
Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to the k*l-fold cross validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets."}]}, {"question": "Which algorithms can be used for both classification and regression tasks", "positive_ctxs": [{"text": "Random Forest Algorithm The Random Forest ML Algorithm is a versatile supervised learning algorithm that's used for both classification and regression analysis tasks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. 
Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.Types of supervised learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email."}]}, {"question": "What does a weak R squared value mean", "positive_ctxs": [{"text": "- if R-squared value 0.3 < r < 0.5 this value is generally considered a weak or low effect size, - if R-squared value 0.5 < r < 0.7 this value is generally considered a Moderate effect size, - if R-squared value r > 0.7 this value is generally considered strong effect size, Ref: Source: Moore, D. S., Notz, W."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In regression analysis, \"mean squared error\", often referred to as mean squared prediction error or \"out-of-sample mean squared error\", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. 
This also is a known, computed quantity, and it varies by sample and by out-of-sample test space."}, {"text": "In regression analysis, \"mean squared error\", often referred to as mean squared prediction error or \"out-of-sample mean squared error\", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space."}, {"text": "In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors\u2014that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is a measure of the quality of an estimator\u2014it is always non-negative, and values closer to zero are better."}, {"text": "In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors\u2014that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss.
The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is a measure of the quality of an estimator\u2014it is always non-negative, and values closer to zero are better."}, {"text": "The use of mean squared error without question has been criticized by the decision theorist James Berger. Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application. Like variance, mean squared error has the disadvantage of heavily weighting outliers."}, {"text": "The use of mean squared error without question has been criticized by the decision theorist James Berger. Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application. Like variance, mean squared error has the disadvantage of heavily weighting outliers."}, {"text": "The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are the mean absolute scaled error (MASE) and the mean squared error.
These all summarize performance in ways that disregard the direction of over- or under-prediction; a measure that does place emphasis on this is the mean signed difference."}]}, {"question": "Is t test robust to violations of normality", "positive_ctxs": [{"text": "the t-test is robust against non-normality; this test is in doubt only when there can be serious outliers (long-tailed distributions \u2013 note the finite variance assumption); or when sample sizes are small and distributions are far from normal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In robust statistics, robust regression is a form of regression analysis designed to overcome some limitations of traditional parametric and non-parametric methods. Regression analysis seeks to find the relationship between one or more independent variables and a dependent variable. Certain widely used methods of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results if those assumptions are not true; thus ordinary least squares is said to be not robust to violations of its assumptions."}, {"text": "With extracellular measurement techniques an electrode (or array of several electrodes) is located in the extracellular space. Spikes, often from several spiking sources, depending on the size of the electrode and its proximity to the sources, can be identified with signal processing techniques. Extracellular measurement has several advantages: 1) Is easier to obtain experimentally; 2) Is robust and lasts for a longer time; 3) Can reflect the dominant effect, especially when conducted in an anatomical region with many similar cells."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day?
Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small p-value indicates non-normal data. Multivariate normality tests include the Cox\u2013Small test"}, {"text": "Other early test statistics include the ratio of the mean absolute deviation to the standard deviation and of the range to the standard deviation. More recent tests of normality include the energy test (Sz\u00e9kely and Rizzo) and the tests based on the empirical characteristic function (ECF) (e.g. Epps and Pulley, Henze\u2013Zirkler, BHEP test). The energy and the ECF tests are powerful tests that apply for testing univariate or multivariate normality and are statistically consistent against general alternatives."}, {"text": "Other early test statistics include the ratio of the mean absolute deviation to the standard deviation and of the range to the standard deviation. More recent tests of normality include the energy test (Sz\u00e9kely and Rizzo) and the tests based on the empirical characteristic function (ECF) (e.g. Epps and Pulley, Henze\u2013Zirkler, BHEP test). The energy and the ECF tests are powerful tests that apply for testing univariate or multivariate normality and are statistically consistent against general alternatives."}]}, {"question": "How do you handle missing data in regression analysis", "positive_ctxs": [{"text": "Therefore, a number of alternative ways of handling the missing data have been developed. Listwise or case deletion. Pairwise deletion.
Mean substitution. Regression imputation. Last observation carried forward. Maximum likelihood. Expectation-Maximization. Multiple imputation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Or, why is my friend depressed? The potential outcomes and regression analysis techniques handle such queries when data is collected using designed experiments. Data collected in observational studies require different techniques for causal inference (because, for example, of issues such as confounding)."}, {"text": "Fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not have an error term included in their estimation, thus the estimates fit perfectly along the regression line without any residual variance. This causes relationships to be over identified and suggest greater precision in the imputed values than is warranted."}, {"text": "Fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not have an error term included in their estimation, thus the estimates fit perfectly along the regression line without any residual variance. This causes relationships to be over identified and suggest greater precision in the imputed values than is warranted."}, {"text": "To impute missing data in statistics, NMF can take missing data while minimizing its cost function, rather than treating these missing data as zeros. This makes it a mathematically proven method for data imputation in statistics.
By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al."}, {"text": "To impute missing data in statistics, NMF can take missing data while minimizing its cost function, rather than treating these missing data as zeros. This makes it a mathematically proven method for data imputation in statistics. By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al."}, {"text": "In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as \"unit imputation\"; when substituting for a component of a data point, it is known as \"item imputation\". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency."}]}, {"question": "What is an adversarial neural network", "positive_ctxs": [{"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. 
Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. 
The connections of the biological neuron are modeled as weights."}]}, {"question": "How does Batch normalization address the problem of Internal Covariate Shift", "positive_ctxs": [{"text": "The normalisation ensures that the inputs have a mean of 0 and a standard deviation of 1, meaning that the input distribution to every neuron will be the same, thereby fixing the problem of internal covariate shift and providing regularisation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015.While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network."}, {"text": "Ioffe, Sergey; Szegedy, Christian (2015). \"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift\", ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, July 2015 Pages 448\u2013456"}, {"text": "Batch normalization was initially proposed to mitigate internal covariate shift. During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. 
This problem is especially severe for deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in significant shift in deeper hidden layers."}, {"text": "The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the dutch book."}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}]}, {"question": "What is constraints satisfaction problem in AI", "positive_ctxs": [{"text": "2.1 The Early Days. Constraint satisfaction, in its basic form, involves finding a value for each one of a set of problem variables where constraints specify that some subsets of values cannot be used together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The constraint composite graph is a node-weighted undirected graph associated with a given combinatorial optimization problem posed as a weighted constraint satisfaction problem. Developed and introduced by Satish Kumar Thittamaranahalli (T. K. 
Satish Kumar), the idea of the constraint composite graph is a big step towards unifying different approaches for exploiting \"structure\" in weighted constraint satisfaction problems.A weighted constraint satisfaction problem (WCSP) is a generalization of a constraint satisfaction problem in which the constraints are no longer \"hard,\" but are extended to specify non-negative costs associated with the tuples. The goal is then to find an assignment of values to all the variables from their respective domains so that the total cost is minimized."}, {"text": "Solving a constraint satisfaction problem on a finite domain is an NP complete problem with respect to the domain size. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationship of the constraint satisfaction problem with problems in other areas such as finite model theory."}, {"text": "Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of intense research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families."}, {"text": "While weighted constraint satisfaction problems are NP-hard to solve in general, several subclasses can be solved in polynomial time when their weighted constraints exhibit specific kinds of numerical structure. Tractable subclasses can also be identified by analyzing the way constraints are placed over the variables. 
Specifically, a weighted constraint satisfaction problem can be solved in time exponential only in the treewidth of its variable-interaction graph (constraint network)."}, {"text": "The techniques used in constraint satisfaction depend on the kind of constraints being considered. Often used are constraints on a finite domain, to the point that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. Such problems are usually solved via search, in particular a form of backtracking or local search."}, {"text": "For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned."}, {"text": "A constraint satisfaction problem on such domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way for assigning a value to each variable in such a way that all constraints are satisfied by these values."}]}, {"question": "What is an intuitive explanation of Cohens kappa statistic", "positive_ctxs": [{"text": "Cohen came up with a mechanism to calculate a value which represents the level of agreement between judges negating the agreement by chance. You can see that balls which are agreed on by chance are removed both from agreed and total number of balls. 
And that is the whole intuition of Kappa value aka Kappa coefficient."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Cohen's kappa coefficient (\u03ba) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as \u03ba takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement."}, {"text": "Cohen's kappa coefficient (\u03ba) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as \u03ba takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement."}, {"text": "Cohen's kappa coefficient (\u03ba) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as \u03ba takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement."}, {"text": "Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem.In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observation per cluster is fixed at n. 
Below,"}, {"text": "Several studies have highlighted the consequences of serial correlation and highlighted the small-cluster problem.In the framework of the Moulton factor, an intuitive explanation of the small cluster problem can be derived from the formula for the Moulton factor. Assume for simplicity that the number of observation per cluster is fixed at n. Below,"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Abductive validation is the process of validating a given hypothesis through abductive reasoning. This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data."}]}, {"question": "What is FDR correction", "positive_ctxs": [{"text": "The false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Two corrections are commonly used: the Greenhouse\u2013Geisser correction and the Huynh\u2013Feldt correction. The Greenhouse\u2013Geisser correction is more conservative, but addresses a common issue of increasing variability over time in a repeated-measures design. The Huynh\u2013Feldt correction is less conservative, but does not address issues of increasing variability."}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Using a multiplicity procedure that controls the FDR criterion is adaptive and scalable. Meaning that controlling the FDR can be very permissive (if the data justify it), or conservative (acting close to control of FWER for sparse problem) - all depending on the number of hypotheses tested and the level of significance. The FDR criterion adapts so that the same number of false discoveries (V) will have different implications, depending on the total number of discoveries (R). This contrasts with the family wise error rate criterion."}, {"text": "is inserted into the BH procedure, it is no longer guaranteed to achieve FDR control at the desired level. Adjustments may be needed in the estimator and several modifications have been proposed. Note that the mean"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "What is the maximum value of the variance of binomial distribution", "positive_ctxs": [{"text": "Mean and Variance of a Binomial Distribution The variance of a Binomial Variable is always less than its mean. \u2234 npq < np."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.
The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}, {"text": "In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. 
It is often used in artificial intelligence applications and research."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}]}, {"question": "Why is squared loss bad for classification", "positive_ctxs": [{"text": "There are two reasons why Mean Squared Error(MSE) is a bad choice for binary classification problems: If we use maximum likelihood estimation(MLE), assuming that the data is from a normal distribution(a wrong assumption, by the way), we get the MSE as a Cost function for optimizing our model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used."}, {"text": "is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function, for classification the categorical crossentropy can be used."}, {"text": "is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function, for classification the categorical crossentropy can be used."}, {"text": "is used for measuring the discrepancy between the target output t and the computed output y. 
For regression analysis problems the squared error can be used as a loss function, for classification the categorical crossentropy can be used."}, {"text": "is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function, for classification the categorical crossentropy can be used."}, {"text": "is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function, for classification the categorical crossentropy can be used."}, {"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}]}, {"question": "What are the benefits of sentiment analysis", "positive_ctxs": [{"text": "Sentiment analysis also means you'll be able to detect changes in the overall opinion towards your brand. Because it provides insight into the way your customers are feeling when they approach you, you can monitor trends and see if overall opinion towards your company drops or rises."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. 
Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text. There are various other types of sentiment analysis, like Aspect-Based sentiment analysis, Grading sentiment analysis (positive, negative, neutral), Multilingual sentiment analysis and detection of emotions."}, {"text": "Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combination ranking score of similarity and sentiment rating can be constructed for each candidate item. Besides the difficulty of the sentiment analysis itself, applying sentiment analysis on reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review."}, {"text": "In general, the utility for practical commercial tasks of sentiment analysis as it is defined in academic research has been called into question, mostly since the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worrying about the effect of public discourse on e.g. brand or corporate reputation. To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals.
The RepLab evaluation data set focuses less on the content of the text under consideration and more on the effect of the text in question on brand reputation. Because evaluation of sentiment analysis is becoming more and more task-based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set."}, {"text": "Features extracted from the user-generated reviews are improved meta-data of items, because as they also reflect aspects of the item like meta-data, extracted features are widely concerned by the users. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning."}, {"text": "Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape. Furthermore, sentiment analysis on Twitter has also been shown to capture the public mood behind human reproduction cycles on a planetary scale, as well as other problems of public-health relevance such as adverse drug reactions."}, {"text": "Oftentimes, the software seeks to extract concepts or metaphors from the medium (such as height or sentiment) and apply the extracted information to generate songs using the ways music theory typically represents those concepts.
Another example is the translation of text into music, which can approach composition by extracting sentiment (positive or negative) from the text using machine learning methods like sentiment analysis and represents that sentiment in terms of chord quality such as minor (sad) or major (happy) chords in the musical output generated."}, {"text": "Lamba & Madhusudhan introduce a nascent way to cater the information needs of today\u2019s library users by repackaging the results from sentiment analysis of social media platforms like Twitter and provide it as a consolidated time-based service in different formats. Further, they propose a new way of conducting marketing in libraries using social media mining and sentiment analysis."}]}, {"question": "What is the difference between probability and likelihood", "positive_ctxs": [{"text": "The main difference between probability and likelihood is that the former is normalized. Probability refers to the occurrence of future events, while a likelihood refers to past events with known outcomes. Probability is used when describing a function of the outcome given a fixed parameter value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u039bp is the absolute difference between pre- and posttest probability of conditions (such as diseases) that the test is expected to achieve. A major factor for such an absolute difference is the power of the test itself, such as can be described in terms of, for example, sensitivity and specificity or likelihood ratio. Another factor is the pre-test probability, with a lower pre-test probability resulting in a lower absolute difference, with the consequence that even very powerful tests achieve a low absolute difference for very unlikely conditions in an individual (such as rare diseases in the absenceower can make a great difference for highly suspected conditions."}, {"text": "Given a model, likelihood intervals can be compared to confidence intervals. 
If \u03b8 is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \u03b8 will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e\u22122 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1)."}, {"text": "Given a model, likelihood intervals can be compared to confidence intervals. If \u03b8 is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \u03b8 will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e\u22122 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1)."}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "In the Rasch model, the probability of a specified response (e.g. 
right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter."}, {"text": "Another estimator which is asymptotically normal and efficient is the maximum likelihood estimator (MLE). The relations between the maximum likelihood and Bayes estimators can be shown in the following simple example."}]}, {"question": "How do you explain a decision tree", "positive_ctxs": [{"text": "A decision tree is simply a set of cascading questions. When you get a data point (i.e. set of features and values), you use each attribute (i.e. a value of a given feature of the data point) to answer a question. The answer to each question decides the next question."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. 
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "Does logistic regression data need to be normally distributed", "positive_ctxs": [{"text": "Logistic regression is quite different than linear regression in that it does not make several of the key assumptions that linear and general linear models (as well as other ordinary least squares algorithm based models) hold so close: (1) logistic regression does not require a linear relationship between the dependent"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes is particularly important in evaluating the appropriateness of ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "The fifth issue, concerning the homogeneity of different treatment regression slopes is particularly important in evaluating the appropriateness of ANCOVA model. Also note that we only need the error terms to be normally distributed. In fact both the independent variable and the concomitant variables will not be normally distributed in most cases."}, {"text": "In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes. That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc."}, {"text": "In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes. 
That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc."}, {"text": "If an improper prior proportional to \u03c3\u22122 is placed over the variance, the t-distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown distributed according to a conjugate normally distributed prior, or is unknown distributed according to an improper constant prior."}, {"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}, {"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. 
This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}]}, {"question": "What is the relationship between F statistic and T statistic", "positive_ctxs": [{"text": "It is often pointed out that when ANOVA is applied to just two groups, and when therefore one can calculate both a t-statistic and an F-statistic from the same data, it happens that the two are related by the simple formula: t2 = F."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "T is easier to calculate by hand than W and the test is equivalent to the two-sided test described above; however, the distribution of the statistic under"}, {"text": "The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson \u2013 the difference between the kurtosis and the square of the skewness (vide infra)."}, {"text": "This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where 'k' is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR2, where T is the sample size and R2 is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is"}, {"text": "This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where 'k' is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR2, where T is the sample size and R2 is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is"}, {"text": "Alternative Univariate test\u2014These tests account for violations to the assumption of sphericity, and can be used when the within-subjects factor exceeds 2 levels. 
The F statistic is the same as in the Standard Univariate ANOVA F test, but is associated with a more accurate p-value. This correction is done by adjusting the degrees of freedom downward for determining the critical F value."}, {"text": "If the test statistic T is reported, an equivalent way to compute the rank correlation is with the difference in proportion between the two rank sums, which is the Kerby (2014) simple difference formula. To continue with the current example, the sample size is 9, so the total rank sum is 45. T is the smaller of the two rank sums, so T is 3 + 4 + 5 + 6 = 18."}, {"text": "The formal test is based on a chi-squared statistic. When the log-rank statistic is large, it is evidence for a difference in the survival times between the groups. The log-rank statistic approximately has a chi-squared distribution with one degree of freedom, and the p-value is calculated using the chi-squared distribution."}]}, {"question": "What is meant by classification in statistics", "positive_ctxs": [{"text": "A classification is an ordered set of related categories used to group data according to its similarities. It consists of codes and descriptors and allows survey responses to be put into meaningful categories in order to produce useful data. A classification is a useful tool for anyone developing statistical surveys."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What is countable set in analysis", "positive_ctxs": [{"text": "A set is countable if: (1) it is finite, or (2) it has the same cardinality (size) as the set of natural numbers (i.e., denumerable). Equivalently, a set is countable if it has the same cardinality as some subset of the set of natural numbers. Otherwise, it is uncountable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, each of which is a countable set (finite Cartesian product). So we are talking about a countable union of countable sets, which is countable by the previous theorem."}, {"text": "A set is countable if: (1) it is finite, or (2) it has the same cardinality (size) as the set of natural numbers (i.e., denumerable). Equivalently, a set is countable if it has the same cardinality as some subset of the set of natural numbers."}, {"text": "In mathematics, a countable set is a set with the same cardinality (number of elements) as some subset of the set of natural numbers. A countable set is either a finite set or a countably infinite set. Whether finite or infinite, the elements of a countable set can always be counted one at a time and\u2014although the counting may never finish\u2014every element of the set is associated with a unique natural number."}, {"text": "is considered for rational numbers y only. (Any other dense countable set may be used equally well.) Thus, only a countable set of equivalence classes is used; all choices of functions within these classes are mutually equivalent, and the corresponding function of rational y is well-defined (for almost every x)."}, {"text": "If the function g : S \u2192 T is surjective and S is countable then T is countable.Cantor's theorem asserts that if A is a set and P(A) is its power set, i.e. the set of all subsets of A, then there is no surjective function from A to P(A). 
A proof is given in the article Cantor's theorem."}, {"text": "f \u00d7 g : N \u00d7 N \u2192 A \u00d7 Bis a surjection from the countable set N \u00d7 N to the set A \u00d7 B and the Corollary implies A \u00d7 B is countable. This result generalizes to the Cartesian product of any finite collection of countable sets and the proof follows by induction on the number of sets in the collection."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}]}, {"question": "What is NLP problem", "positive_ctxs": [{"text": "Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the fact that the NLP is easier to solve than the boundary-value problem. 
The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "What is the difference between z test and t test", "positive_ctxs": [{"text": "Z-tests are statistical calculations that can be used to compare population means to a sample's. T-tests are calculations used to test a hypothesis, but they are most useful when we need to determine if there is a statistically significant difference between two independent sample groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. 
Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}, {"text": "The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic,"}]}, {"question": "What is difference between discrete and continuous variable", "positive_ctxs": [{"text": "If a variable can take on any value between two specified values, it is called a continuous variable; otherwise, it is called a discrete variable. Some examples will clarify the difference between discrete and continuous variables. 
The number of heads could be any integer value between 0 and plus infinity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The raison d'\u00eatre of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two."}, {"text": "Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is"}, {"text": "Because the distribution of a continuous latent variable can be approximated by a discrete distribution, the distinction between continuous and discrete variables turns out not to be fundamental at all. Therefore, there may be a psychometrical latent variable, but not a psychological psychometric variable."}, {"text": "Because the distribution of a continuous latent variable can be approximated by a discrete distribution, the distinction between continuous and discrete variables turns out not to be fundamental at all. Therefore, there may be a psychometrical latent variable, but not a psychological psychometric variable."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. 
This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}]}, {"question": "How artificial intelligence is related to robotics", "positive_ctxs": [{"text": "Artificial intelligence (AI) is a branch of computer science. Most AI programs are not used to control robots. Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators, and non-AI programming."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "Morphogenetic robotics is related to, but differs from, epigenetic robotics. 
The main difference between morphogenetic robotics and epigenetic robotics is that the former focuses on self-organization, self-reconfiguration, self-assembly and self-adaptive control of robots using genetic and cellular mechanisms inspired from biological early morphogenesis (activity-independent development), during which the body and controller of the organisms are developed simultaneously, whereas the latter emphasizes the development of robots' cognitive capabilities, such as language, emotion and social skills, through experience during the lifetime (activity-dependent development). Morphogenetic robotics is closely connected to developmental biology and systems biology, whilst epigenetic robotics is related to developmental cognitive neuroscience emerged from cognitive science, developmental psychology and neuroscience."}, {"text": "Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics. The aim of the theory of artificial consciousness is to \"Define that which would have to be synthesized were consciousness to be found in an engineered artifact\" (Aleksander 1995)."}]}, {"question": "How does a bounding box work", "positive_ctxs": [{"text": "The model works by first splitting the input image into a grid of cells, where each cell is responsible for predicting a bounding box if the center of a bounding box falls within it. Each grid cell predicts a bounding box involving the x, y coordinate and the width and height and the confidence."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The arbitrarily oriented minimum bounding box is the minimum bounding box, calculated subject to no constraints as to the orientation of the result. 
Minimum bounding box algorithms based on the rotating calipers method can be used to find the minimum-area or minimum-perimeter bounding box of a two-dimensional convex polygon in linear time, and of a two-dimensional point set in the time it takes to construct its convex hull followed by a linear-time computation. A three-dimensional rotating calipers algorithm can find the minimum-volume arbitrarily-oriented bounding box of a three-dimensional point set in cubic time."}, {"text": "In geometry, the minimum or smallest bounding or enclosing box for a point set (S) in N dimensions is the box with the smallest measure (area, volume, or hypervolume in higher dimensions) within which all the points lie. When other kinds of measure are used, the minimum box is usually called accordingly, e.g., \"minimum-perimeter bounding box\"."}, {"text": "The axis-aligned minimum bounding box (or AABB) for a given point set is its minimum bounding box subject to the constraint that the edges of the box are parallel to the (Cartesian) coordinate axes. It is the Cartesian product of N intervals each of which is defined by the minimal and maximal value of the corresponding coordinate for the points in S."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "ImageNet crowdsources its annotation process. Image-level annotations indicate the presence or absence of an object class in an image, such as \"there are tigers in this image\" or \"there are no tigers in this image\". Object-level annotations provide a bounding box around the (visible part of the) indicated object."}, {"text": "ImageNet crowdsources its annotation process. 
Image-level annotations indicate the presence or absence of an object class in an image, such as \"there are tigers in this image\" or \"there are no tigers in this image\". Object-level annotations provide a bounding box around the (visible part of the) indicated object."}, {"text": "\"a two-way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. \"One box is to be singled out and called the starting point."}]}, {"question": "What are autoregressive models in machine learning", "positive_ctxs": [{"text": "Autoregression is a time series model that uses observations from previous time steps as input to a regression equation to predict the value at the next time step. It is a very simple idea that can result in accurate forecasts on a range of time series problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. 
Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Fitting the MA estimates is more complicated than it is in autoregressive models (AR models), because the lagged error terms are not observable. This means that iterative non-linear fitting procedures need to be used in place of linear least squares."}, {"text": "Fitting the MA estimates is more complicated than it is in autoregressive models (AR models), because the lagged error terms are not observable. This means that iterative non-linear fitting procedures need to be used in place of linear least squares."}, {"text": "Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. 
GPT-3's full version has a capacity of 175 billion machine learning parameters."}, {"text": "The high degree of automation in AutoML allows non-experts to make use of machine learning models and techniques without requiring becoming an expert in the field first."}]}, {"question": "What are the conditions in which Gradient descent is applied", "positive_ctxs": [{"text": "Gradient descent is best used when the parameters cannot be calculated analytically (e.g. using linear algebra) and must be searched for by an optimization algorithm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Gradient descent is used in machine-learning by defining a loss function that reflects the error of the learner on the training set and then minimizing that function."}, {"text": "Gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones. In the latter case, the search space is typically a function space, and one calculates the Fr\u00e9chet derivative of the functional to be minimized to determine the descent direction.That gradient descent works in any number of dimensions (finite number at least) can be seen as a consequence of the Cauchy-Schwarz inequality. That article proves that the magnitude of the inner (dot) product of two vectors of any dimension is maximized when they are colinear."}, {"text": "Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. 
Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent."}, {"text": "Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not."}, {"text": "Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not."}, {"text": "Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not."}, {"text": "Gradient descent with backpropagation is not guaranteed to find the global minimum of the error function, but only a local minimum; also, it has trouble crossing plateaus in the error function landscape. This issue, caused by the non-convexity of error functions in neural networks, was long thought to be a major drawback, but Yann LeCun et al. argue that in many practical problems, it is not."}]}, {"question": "Is Softmax a loss function", "positive_ctxs": [{"text": "Softmax is an activation function that outputs the probability for each class and these probabilities will sum up to one. 
Cross Entropy loss is just the sum of the negative logarithm of the probabilities. Therefore, Softmax loss is just these two appended together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}]}, {"question": "What is parametric statistics and nonparametric statistics", "positive_ctxs": [{"text": "Parametric statistics are based on assumptions about the distribution of population from which the sample was taken. 
Nonparametric statistics are not based on assumptions, that is, the data can be collected from a sample that does not follow a specific distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}, {"text": "It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since it is computed directly from the data values and thus estimates the parameter known as the population correlation."}, {"text": "In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis."}]}, {"question": "What is dimensional analysis method", "positive_ctxs": [{"text": "Dimensional Analysis (also called Factor-Label Method or the Unit Factor Method) is a problem-solving method that uses the fact that any number or expression can be multiplied by one without changing its value. It is a useful technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows.
More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios."}, {"text": "Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In fluid mechanics, dimensional analysis is performed in order to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships."}, {"text": "GDA deals with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to the support vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space. Similar to LDA, the objective of GDA is to find a projection for the features into a lower dimensional space by maximizing the ratio of between-class scatter to within-class scatter."}]}, {"question": "Why is test retest reliability important", "positive_ctxs": [{"text": "Having good test re-test reliability signifies the internal validity of a test and ensures that the measurements obtained in one sitting are both representative and stable over time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Carryover effect, particularly if the interval between test and retest is short. When retested, people may remember their original answer, which could affect answers on the second administration."}, {"text": "As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization."}, {"text": "Because the same test is administered twice and every test is parallel with itself, differences between scores on the test and scores on the retest should be due solely to measurement error. This sort of argument is quite probably true for many physical measurements. However, this argument is often inappropriate for psychological measurement, because it is often impossible to consider the second administration of a test a parallel measure to the first.The second administration of a psychological test might yield systematically different scores than the first administration due to the following reasons:"}, {"text": "Correlating scores on one half of the test with scores on the other half of the testThe correlation between these two split halves is used in estimating the reliability of the test. This halves reliability estimate is then stepped up to the full test length using the Spearman\u2013Brown prediction formula."}, {"text": "The desired level of statistical confidence also plays a role in reliability testing.
Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time."}, {"text": "In practice, testing measures are never perfectly consistent. Theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement. The basic starting point for almost all theories of test reliability is the idea that test scores reflect the influence of two sorts of factors:1."}, {"text": "Each test case is considered by the group and \"scored\" as a success or failure. This scoring is the official result used by the reliability engineer."}]}, {"question": "What is multidimensional scaling in statistics", "positive_ctxs": [{"text": "Multidimensional scaling is a visual representation of distances or dissimilarities between sets of objects. Objects that are more similar (or have shorter distances) are closer together on the graph than objects that are less similar (or have longer distances)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A method based on proximity matrices is one where the data is presented to the algorithm in the form of a similarity matrix or a distance matrix. These methods all fall under the broader class of metric multidimensional scaling. The variations tend to be differences in how the proximity data is computed; for example, Isomap, locally linear embeddings, maximum variance unfolding, and Sammon mapping (which is not in fact a mapping) are examples of metric multidimensional scaling methods."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Related to autoencoders is the NeuroScale algorithm, which uses stress functions inspired by multidimensional scaling and Sammon mappings (see above) to learn a non-linear mapping from the high-dimensional to the embedded space. The mappings in NeuroScale are based on radial basis function networks.
Another usage of a neural network for dimensionality reduction is to make it learn the tangent planes in the data."}]}, {"question": "Whats the difference between dimensionality reduction and feature selection", "positive_ctxs": [{"text": "Feature Selection vs Dimensionality Reduction While both methods are used for reducing the number of features in a dataset, there is an important difference. Feature selection is simply selecting and excluding given features without changing them. Dimensionality reduction transforms features into a lower dimension."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The vector space associated with these vectors is often called the feature space. In order to reduce the dimensionality of the feature space, a number of dimensionality reduction techniques can be employed."}, {"text": "enhanced generalization by reducing overfitting (formally, reduction of variance)The central premise when using a feature selection technique is that the data contains some features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information. Redundant and irrelevant are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.Feature selection techniques should be distinguished from feature extraction.
Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features."}, {"text": "Dimensionality reduction and feature selection can decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance."}, {"text": "Below is a summary of some of the important algorithms from the history of manifold learning and nonlinear dimensionality reduction (NLDR). Many of these non-linear dimensionality reduction methods are related to the linear methods listed below. Non-linear methods can be broadly classified into two groups: those that provide a mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa), and those that just give a visualisation."}, {"text": "The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes. Their fundamental differences have been well-studied in regression variable selection and autoregression order selection problems.
In general, if the goal is prediction, AIC and leave-one-out cross-validations are preferred."}]}, {"question": "What are the applications of machine learning", "positive_ctxs": [{"text": "Top 10 Machine Learning ApplicationsTraffic Alerts.Social Media.Transportation and Commuting.Products Recommendations.Virtual Personal Assistants.Self Driving Cars.Dynamic Pricing.Google Translate.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "In addition, machine learning has been applied to systems biology problems such as identifying transcription factor binding sites using a technique known as Markov chain optimization. Genetic algorithms, machine learning techniques which are based on the natural process of evolution, have been used to model genetic networks and regulatory structures.Other systems biology applications of machine learning include the task of enzyme function prediction, high throughput microarray data analysis, analysis of genome-wide association studies to better understand markers of disease, protein function prediction."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. 
AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}, {"text": "In the recent years, due to the growing computational power which allows training large ensemble learning in a reasonable time frame, the number of its applications has grown increasingly. Some of the applications of ensemble classifiers include:"}]}, {"question": "What are the composition for agents in artificial intelligence", "positive_ctxs": [{"text": "Explanation: Simple reflex agent is based on the present condition and so it is condition action rule. 5. What are the composition for agents in artificial intelligence?
Explanation: An agent program will implement function mapping percepts to actions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As intelligent agents become more popular, there are increasing legal risks involved.Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations."}, {"text": "Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity. The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction. The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group.Atlee and P\u00f3r suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer."}, {"text": "Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. 
In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots."}, {"text": "The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:"}, {"text": "The Air Operations Division (AOD) uses AI for the rule based expert systems. The AOD has use for artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post processing of the simulator data into symbolic summaries.The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators are using artificial intelligence in order to process the data taken from simulated flights."}]}, {"question": "What are the advantages and disadvantages of linear regression", "positive_ctxs": [{"text": "Linear regression is a linear method to model the relationship between your independent variables and your dependent variables. Advantages include how simple it is and ease with implementation and disadvantages include how is' lack of practicality and how most problems in our real world aren't \u201clinear\u201d."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Since both advantages and disadvantages present on the two way of modeling, combining both approaches will be a good modeling in practice. For example, in Marras' article A Joint Discriminative Generative Model for Deformable Model Construction and Classification, he and his coauthors apply the combination of two modelings on face classification of the models, and receive a higher accuracy than the traditional approach."}, {"text": "The choice of numerator layout in the introductory sections below does not imply that this is the \"correct\" or \"superior\" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors."}, {"text": "of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression.
Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.So we define odds of the dependent variable equaling a case (given some linear combination"}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise.
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}]}, {"question": "Where greedy algorithm is used", "positive_ctxs": [{"text": "A greedy algorithm is used to construct a Huffman tree during Huffman coding where it finds an optimal solution. In decision tree learning, greedy algorithms are commonly used, however they are not guaranteed to find the optimal solution. One popular such algorithm is the ID3 algorithm for decision tree construction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search, or branch-and-bound algorithm.
There are a few variations to the greedy algorithm:"}, {"text": "In particular, the odd greedy expansion of a fraction x/y is formed by a greedy algorithm of this type in which all denominators are constrained to be odd numbers; it is known that, whenever y is odd, there is a finite Egyptian fraction expansion in which all denominators are odd, but it is not known whether the odd greedy expansion is always finite."}, {"text": "While submodular functions are fitting problems for summarization, they also admit very efficient algorithms for optimization. For example, a simple greedy algorithm admits a constant factor guarantee. Moreover, the greedy algorithm is extremely simple to implement and can scale to large datasets, which is very important for summarization problems."}, {"text": "A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not usually produce an optimal solution, but nonetheless, a greedy heuristic may yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time."}, {"text": "Re-Pair is a greedy algorithm using the strategy of most-frequent-first substitution. The compressive performance is powerful, although the main memory space requirement is very large."}, {"text": "If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees, and the algorithm for finding optimum Huffman trees."}, {"text": "In mathematics, the greedy algorithm for Egyptian fractions is a greedy algorithm, first described by Fibonacci, for transforming rational numbers into Egyptian fractions. 
An Egyptian fraction is a representation of an irreducible fraction as a sum of distinct unit fractions, as e.g. 5/6 = 1/2 + 1/3."}]}, {"question": "What is backward selection", "positive_ctxs": [{"text": "This approach involves either forward selection, adding features one at a time, or backward selection, removing features one at a time until some criterion is reached. Additionally, a bidirectional selection method is available that involves adding or removing a feature at each step."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass"}, {"text": "Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate table bases for chess endgames for computer chess."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Backward chaining is a bit less straight forward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}]}, {"question": "How is spectral analysis used", "positive_ctxs": [{"text": "Spectroscopy in chemistry and physics, a method of analyzing the properties of matter from their electromagnetic interactions. Spectral estimation, in statistics and signal processing, an algorithm that estimates the strength of different frequency components (the power spectrum) of a time-domain signal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values."}, {"text": "When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval.
The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite."}, {"text": "The spectral centroid of a signal is the midpoint of its spectral density function, i.e. the frequency that divides the distribution into two equal parts."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Energy spectral density describes how the energy of a signal or a time series is distributed with frequency. Here, the term energy is used in the generalized sense of signal processing; that is, the energy"}, {"text": "In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy."}, {"text": "are known.The algorithm is rarely used for solving linear equations, with the conjugate gradient method being one of the most popular alternatives. The number of gradient descent iterations is commonly proportional to the spectral condition number"}]}, {"question": "Does batch size affect accuracy", "positive_ctxs": [{"text": "Batch size controls the accuracy of the estimate of the error gradient when training neural networks. Batch, Stochastic, and Minibatch gradient descent are the three main flavors of the learning algorithm. 
There is a tension between batch size and the speed and stability of the learning process."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The correlation between batch normalization and internal covariate shift is widely accepted but was not supported by experimental results. Scholars recently show with experiments that the hypothesized relationship is not an accurate one. Rather, the enhanced accuracy with the batch normalization layer seems to be independent of internal covariate shift."}, {"text": "The batch layer precomputes results using a distributed processing system that can handle very large quantities of data. The batch layer aims at perfect accuracy by being able to process all available data when generating views. This means it can fix any errors by recomputing based on the complete data set, then updating existing views."}, {"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015.While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network."}, {"text": "The use of different model parameters and different corpus sizes can greatly affect the quality of a word2vec model. Accuracy can be improved in a number of ways, including the choice of model architecture (CBOW or Skip-Gram), increasing the training data set, increasing the number of vector dimensions, and increasing the window size of words considered by the algorithm. 
Each of these improvements comes with the cost of increased computational complexity and therefore increased model generation time.In models using large corpora and a high number of dimensions, the skip-gram model yields the highest overall accuracy, and consistently produces the highest accuracy on semantic relationships, as well as yielding the highest syntactic accuracy in most cases."}]}, {"question": "How do you use stochastic gradient descent", "positive_ctxs": [{"text": "4:1410:53Suggested clip \u00b7 113 secondsStochastic Gradient Descent, Clearly Explained!!! - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. 
It can be regarded as a stochastic approximation of gradient descent optimization."}]}, {"question": "How do you solve the vanishing gradient problem", "positive_ctxs": [{"text": "The simplest solution is to use other activation functions, such as ReLU, which doesn't cause a small derivative. Residual networks are another solution, as they provide residual connections straight to earlier layers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "This allows information from the earlier parts of the network to be passed to the deeper parts of the network, helping maintain signal propagation even in deeper networks. Skip connections are a critical component of what allowed successful training of deeper neural networks. ResNets yielded lower training error (and test error) than their shallower counterparts simply by reintroducing outputs from shallower layers in the network to compensate for the vanishing data.Note that ResNets are an ensemble of relatively shallow nets and do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network \u2013 rather, they avoid the problem simply by constructing ensembles of many short networks together."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. 
Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid to solve problems like image reconstruction and face localization.Neural networks can also be optimized by using a universal search algorithm on the space of neural network's weights, e.g., random guess or more systematically genetic algorithm. This approach is not based on gradient and avoids the vanishing gradient problem."}]}, {"question": "What do you know about associative network and frames", "positive_ctxs": [{"text": "In this view, associative networks are fundamentally unorganized lists of features. 
By specifying what attributes to include, a frame structure promises to provide the \"framework\" upon which to organize and hang what a consumer knows about a product."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "Introduced by Bart Kosko, a bidirectional associative memory (BAM) network is a variant of a Hopfield network that stores associative data as a vector. The bi-directionality comes from passing information through a matrix and its transpose. 
Typically, bipolar encoding is preferred to binary encoding of the associative pairs."}]}, {"question": "How do you prove an estimator is unbiased", "positive_ctxs": [{"text": "You might also see this written as something like \u201cAn unbiased estimator is when the mean of the statistic's sampling distribution is equal to the population's parameter.\u201d This essentially means the same thing: if the statistic equals the parameter, then it's unbiased."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Thus e(T) is the minimum possible variance for an unbiased estimator divided by its actual variance. The Cram\u00e9r\u2013Rao bound can be used to prove that e(T) \u2264 1."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. 
A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "for all values of the parameter, then the estimator is called efficient.Equivalently, the estimator achieves equality in the Cram\u00e9r\u2013Rao inequality for all \u03b8. 
The Cram\u00e9r\u2013Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the \"best\" an unbiased estimator can be."}]}, {"question": "When would you use a bivariate correlation", "positive_ctxs": [{"text": "You can use a bivariate Pearson Correlation to test whether there is a statistically significant linear relationship between height and weight, and to determine the strength and direction of the association."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n \u2212 2. 
Specifically, if the underlying variables are white and have a bivariate normal distribution, the variable"}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}]}, {"question": "Is Monte Carlo Tree Search reinforcement learning", "positive_ctxs": [{"text": "Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved widespread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There is no consensus on how Monte Carlo should be defined. 
For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior)."}, {"text": "For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator."}, {"text": "Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. 
These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known. This is a form of bootstrapping, as illustrated with the following example:"}, {"text": "In contrast to traditional Markov chain Monte Carlo methods, the precision parameter of this class of interacting Markov chain Monte Carlo samplers is only related to the number of interacting Markov chain Monte Carlo samplers. These advanced particle methodologies belong to the class of Feynman-Kac particle models, also called Sequential Monte Carlo or particle filter methods in Bayesian inference and signal processing communities. Interacting Markov chain Monte Carlo methods can also be interpreted as a mutation-selection genetic particle algorithm with Markov chain Monte Carlo mutations."}, {"text": "In principle, any Markov chain Monte Carlo sampler can be turned into an interacting Markov chain Monte Carlo sampler. These interacting Markov chain Monte Carlo samplers can be interpreted as a way to run in parallel a sequence of Markov chain Monte Carlo samplers. For instance, interacting simulated annealing algorithms are based on independent Metropolis-Hastings moves interacting sequentially with a selection-resampling type mechanism."}]}, {"question": "What are latent variables in machine learning", "positive_ctxs": [{"text": "A latent variable is a random variable which you can't observe neither in training nor in test phase . It is derived from the latin word lat\u0113re which means hidden. 
Intuitionally, some phenomenons like incidences,altruism one can't measure while others like speed or height one can."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In factor analysis and latent trait analysis the latent variables are treated as continuous normally distributed variables, and in latent profile analysis and latent class analysis as from a multinomial distribution. The manifest variables in factor analysis and latent profile analysis are continuous and in most cases, their conditional distribution given the latent variables is assumed to be normal. In latent trait analysis and latent class analysis, the manifest variables are discrete."}, {"text": "In statistics, latent variables (from Latin: present participle of lateo (\u201clie hidden\u201d), as opposed to observable variables) are variables that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called latent variable models. 
Latent variable models are used in many disciplines, including psychology, demography, economics, engineering, medicine, physics, machine learning/artificial intelligence, bioinformatics, chemometrics, natural language processing, econometrics, management and the social sciences."}, {"text": "In machine learning, the CP-decomposition is the central ingredient in learning probabilistic latent variables models via the technique of moment-matching. For example, consider the multi-view model which is a probabilistic latent variable model. 
In this model, the generation of samples are posited as follows: there exists a hidden random variable that is not observed directly, given which, there are several conditionally independent random variables known as the different \"views\" of the hidden variable."}, {"text": "can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in"}]}, {"question": "How do you evaluate machine learning algorithms", "positive_ctxs": [{"text": "The three main metrics used to evaluate a classification model are accuracy, precision, and recall. Accuracy is defined as the percentage of correct predictions for the test data. It can be calculated easily by dividing the number of correct predictions by the number of total predictions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Among the class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.Apache Spark requires a cluster manager and a distributed storage system. For cluster management, Spark supports standalone (native Spark cluster, where you can launch a cluster either manually or use the launch scripts provided by the install package. 
It is also possible to run these daemons on a single machine for testing), Hadoop YARN, Apache Mesos or Kubernetes."}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": "High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce."}]}, {"question": "What is CRF in machine learning", "positive_ctxs": [{"text": "Conditional random fields (CRFs) are a class of statistical modeling method often applied in pattern recognition and machine learning and used for structured prediction. 
Whereas a classifier predicts a label for a single sample without considering \"neighboring\" samples, a CRF can take context into account."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Conditional random fields (CRFs) are a class of statistical modeling method often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering \"neighboring\" samples, a CRF can take context into account. To do so, the prediction is modeled as a graphical model, which implements dependencies between the predictions."}, {"text": "For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MRF and the same arguments hold."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How do you prove a statistic is sufficient", "positive_ctxs": [{"text": "Formally, a statistic T(X1,\u00b7\u00b7\u00b7,Xn) is said to be sufficient for \u03b8 if the conditional distribution of X1,\u00b7\u00b7\u00b7,Xn, given T = t, does not depend on \u03b8 for any value of t. In other words, given the value of T, we can gain no more knowledge about \u03b8 from knowing more about the probability distribution of X1,\u00b7\u00b7\u00b7,Xn."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": ", denote a random sample from a distribution having the pdf f(x, \u03b8) for \u03b9 < \u03b8 < \u03b4. Let Y1 = u1(X1, X2, ..., Xn) be a statistic whose pdf is g1(y1; \u03b8).
What we want to prove is that Y1 = u1(X1, X2, ..., Xn) is a sufficient statistic for \u03b8 if and only if, for some function H,"}, {"text": "If there exists a minimal sufficient statistic, and this is usually the case, then every complete sufficient statistic is necessarily minimal sufficient(note that this statement does not exclude the option of a pathological case in which a complete sufficient exists while there is no minimal sufficient statistic). While it is hard to find cases in which a minimal sufficient statistic does not exist, it is not so hard to find cases in which there is no complete statistic."}, {"text": "For some parametric families, a complete sufficient statistic does not exist (for example, see Galili and Meilijson 2016 ). Also, a minimal sufficient statistic need not exist. (A case in which there is no minimal sufficient statistic was shown by Bahadur in 1957.)"}, {"text": "A sufficient statistic is minimal sufficient if it can be represented as a function of any other sufficient statistic. In other words, S(X) is minimal sufficient if and only if"}, {"text": "In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if \"no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter\". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than the statistic, as to which of those probability distributions is the sampling distribution."}, {"text": "Bounded completeness also occurs in Bahadur's theorem. 
In the case where there exists at least one minimal sufficient statistic, a statistic which is sufficient and boundedly complete, is necessarily minimal sufficient."}]}, {"question": "What is outlier detection in machine learning", "positive_ctxs": [{"text": "Anomaly detection (or outlier detection) is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In technology, the principle became important for radar detection methods during the Cold War, where unusual aircraft-reflection patterns could indicate an attack by a new type of aircraft. Today, the phenomenon plays an important role in machine learning and data science, where the corresponding methods are known as anomaly detection or outlier detection. An extensive methodological overview is given by Markou and Singh."}, {"text": "T-distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear dimensionality reduction technique useful for visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well."}, {"text": "T-distributed Stochastic Neighbor Embedding (t-SNE) is a non-linear dimensionality reduction technique useful for visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In statistics, an outlier is a data point that differs significantly from other observations. 
An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set. An outlier can cause serious problems in statistical analyses."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}]}, {"question": "What are some Machine Learning techniques for objective optimization", "positive_ctxs": [{"text": "Machine learning usually has to achieve multiple targets, which are often conflicting with each other. Multi-objective model selection to improve the performance of learning models, such as neural networks, support vector machines, decision trees, and fuzzy systems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "General performance. Here some test functions are presented with the aim of giving an idea about the different situations that optimization algorithms have to face when coping with these kinds of problems. In the first part, some objective functions for single-objective optimization cases are presented.
In the second part, test functions with their respective Pareto fronts for multi-objective optimization problems (MOP) are given."}, {"text": "More generally, optimization includes finding \"best available\" values of some objective function given a defined domain (or input), including a variety of different types of objective functions and different types of domains."}, {"text": "Bifet, Albert; Gavald\u00e0, Ricard; Holmes, Geoff; Pfahringer, Bernhard (2018). Machine Learning for Data Streams with Practical Examples in MOA. Adaptive Computation and Machine Learning."}, {"text": "Vector optimization is a subarea of mathematical optimization where optimization problems with vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: The objective space is the finite dimensional Euclidean space partially ordered by the component-wise \"less than or equal to\" ordering."}, {"text": "To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called the objective function."}, {"text": "In February 2017, IBM announced the first Machine Learning Hub in Silicon Valley to share expertise and teach companies about machine learning and data science. In April 2017 they expanded to Toronto, Beijing, and Stuttgart. A fifth Machine Learning Hub was created in August 2017 in India, Bangalore."}, {"text": "Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems.
It has similarities with Quasi-Newton methods."}]}, {"question": "How do you know if a bivariate is normal distribution", "positive_ctxs": [{"text": "Two random variables X and Y are said to be bivariate normal, or jointly normal, if aX+bY has a normal distribution for all a,b\u2208R. In the above definition, if we let a=b=0, then aX+bY=0. We agree that the constant zero is a normal random variable with mean and variance 0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n \u2212 2. Specifically, if the underlying variables are white and have a bivariate normal distribution, the variable"}, {"text": "For pairs from an uncorrelated bivariate normal distribution, the sampling distribution of a certain function of Pearson's correlation coefficient follows Student's t-distribution with degrees of freedom n \u2212 2. Specifically, if the underlying variables are white and have a bivariate normal distribution, the variable"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "The sample correlation coefficient r is not an unbiased estimate of \u03c1. 
For data that follows a bivariate normal distribution, the expectation E[r] for the sample correlation coefficient r of a normal bivariate is"}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}]}, {"question": "Where do we use eigenvalues", "positive_ctxs": [{"text": "The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data. They can be used for predicting stock prices and analyzing correlations between various stocks, corresponding to different companies."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot. There is a large IBC literature on results in various settings; see Where to Learn More below."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. 
Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors."}, {"text": "To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors."}, {"text": "To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors."}, {"text": "To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors."}]}, {"question": "Why would you use a nonparametric test", "positive_ctxs": [{"text": "Nonparametric tests are sometimes called distribution-free tests because they are based on fewer assumptions (e.g., they do not assume that the outcome is approximately normally distributed). There are several statistical tests that can be used to assess whether data are likely from a normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is an anomaly for a small city to field such a good team. the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern\u2014that the small city was no longer small. 
Why would you put a large city of your best and brightest in the middle of nowhere?"}, {"text": "In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis."}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "A Wilcoxon signed-rank test is a nonparametric test that can be used to determine whether two dependent samples were selected from populations having the same distribution."}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}, {"text": "This and other work by Arbuthnot is credited as \"\u2026 the first use of significance tests \u2026\" the first example of reasoning about statistical significance, and \"\u2026 perhaps the first published report of a nonparametric test \u2026\", specifically the sign test; see details at Sign test \u00a7 History."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What are the differences between autoregressive and moving average models", "positive_ctxs": [{"text": "Rather than using the past values of the forecast variable in a regression, a moving average model uses past forecast errors in a regression-like model. 
While, the autoregressive model(AR) uses the past forecasts to predict future values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "Combinations of these ideas produce autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models. The autoregressive fractionally integrated moving average (ARFIMA) model generalizes the former three. Extensions of these classes to deal with vector-valued data are available under the heading of multivariate time-series models and sometimes the preceding acronyms are extended by including an initial \"V\" for \"vector\", as in VAR for vector autoregression."}, {"text": "An example of a discrete-time stationary process where the sample space is also discrete (so that the random variable may take one of N possible values) is a Bernoulli scheme. 
Other examples of a discrete-time stationary process with continuous sample space include some autoregressive and moving average processes which are both subsets of the autoregressive moving average model. Models with a non-trivial autoregressive component may be either stationary or non-stationary, depending on the parameter values, and important non-stationary special cases are where unit roots exist in the model."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "Models for time series data can have many forms and represent different stochastic processes. When modeling variations in the level of a process, three broad classes of practical importance are the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. These three classes depend linearly on previous data points."}]}, {"question": "What do the coefficients in logistic regression mean", "positive_ctxs": [{"text": "Coef. A regression coefficient describes the size and direction of the relationship between a predictor and the response variable. 
Coefficients are the numbers by which the values of the term are multiplied in a regression equation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is a regression coefficient associated with the mth explanatory variable and the kth outcome. As explained in the logistic regression article, the regression coefficients and explanatory variables are normally grouped into vectors of size M+1, so that the predictor function can be written more compactly:"}, {"text": "is a regression coefficient associated with the mth explanatory variable and the kth outcome. As explained in the logistic regression article, the regression coefficients and explanatory variables are normally grouped into vectors of size M+1, so that the predictor function can be written more compactly:"}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. 
Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}]}, {"question": "How does ReLU solve vanishing gradient problem", "positive_ctxs": [{"text": "RELU activation solves this by having a gradient slope of 1, so during backpropagation, there isn't gradients passed back that are progressively getting smaller and smaller. but instead they are staying the same, which is how RELU solves the vanishing gradient problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Dying ReLU problem: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state and \"dies\". This is a form of the vanishing gradient problem."}, {"text": "Dying ReLU problem: ReLU neurons can sometimes be pushed into states in which they become inactive for essentially all inputs. In this state, no gradients flow backward through the neuron, and so the neuron becomes stuck in a perpetually inactive state and \"dies\". 
This is a form of the vanishing gradient problem."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid to solve problems like image reconstruction and face localization.Neural networks can also be optimized by using a universal search algorithm on the space of neural network's weights, e.g., random guess or more systematically genetic algorithm. 
This approach is not based on gradient and avoids the vanishing gradient problem."}, {"text": "Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid to solve problems like image reconstruction and face localization.Neural networks can also be optimized by using a universal search algorithm on the space of neural network's weights, e.g., random guess or more systematically genetic algorithm. This approach is not based on gradient and avoids the vanishing gradient problem."}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}]}, {"question": "How do you calculate the likelihood ratio", "positive_ctxs": [{"text": "As you have seen, in order to perform a likelihood ratio test, one must estimate both of the models one wishes to compare. The advantage of the Wald and Lagrange multiplier (or score) tests is that they approximate the LR test, but require that only one model be estimated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "The numerator corresponds to the likelihood of an observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of an observed outcome, varying parameters over the whole parameter space. The numerator of this ratio is less than the denominator; so, the likelihood ratio is between 0 and 1."}, {"text": "Odds provide a measure of the likelihood of a particular outcome. They are calculated as the ratio of the number of events that produce that outcome to the number that do not. Odds are commonly used in gambling and statistics."}, {"text": "One can take ratios of a complementary pair of ratios, yielding four likelihood ratios (two column ratio of ratios, two row ratio of ratios). This is primarily done for the column (condition) ratios, yielding likelihood ratios in diagnostic testing. Taking the ratio of one of these groups of ratios yields a final ratio, the diagnostic odds ratio (DOR)."}, {"text": "propose a p-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate. This p-value is inferentially consistent with classical tests of normally distributed data as well as with likelihood ratios and support intervals based on this conditional likelihood function. It is also readily computable."}]}, {"question": "Why is my ROC curve inverted", "positive_ctxs": [{"text": "Answer. 
When the ROC curve dips prominently into the lower right half of the graph, this is likely a sign that either the wrong State Value has been specified or the wrong Test-State association direction has been specified in the \"Test Direction\" area of the \"ROC Curve:Options\" dialog."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In logic and mathematics, contraposition refers to the inference of going from a conditional statement into its logically equivalent contrapositive, and an associated proof method known as proof by contraposition. The contrapositive of a statement has its antecedent and consequent inverted and flipped. For instance, the contrapositive of the conditional statement \"If it is raining, then I wear my coat\" is the statement \"If I don't wear my coat, then it isn't raining.\""}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. 
Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}, {"text": "For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P \u226a N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.The ROC area under the curve is also called c-statistic or c statistic."}]}, {"question": "What are the least squares assumptions", "positive_ctxs": [{"text": "The Least Squares AssumptionsUseful Books for This Topic: ASSUMPTION #1: The conditional distribution of a given error term given a level of an independent variable x has a mean of zero. ASSUMPTION #2: (X,Y) for all n are independently and identically distributed. 
ASSUMPTION #3: Large outliers are unlikely.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Ordinary least squares (OLS) is often used for estimation since it provides the BLUE or \"best linear unbiased estimator\" (where \"best\" means most efficient, unbiased estimator) given the Gauss-Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. Estimators that incorporate prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or \"frequentist\" approaches."}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}, {"text": "as the step size is now normalized. 
Such comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and"}]}, {"question": "What is the relationship between tensorflow with keras", "positive_ctxs": [{"text": "Keras is a neural network library while TensorFlow is the open-source library for a number of various tasks in machine learning. TensorFlow provides both high-level and low-level APIs while Keras provides only high-level APIs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When two or more random variables are defined on a probability space, it is useful to describe how they vary together; that is, it is useful to measure the relationship between the variables. A common measure of the relationship between two random variables is the covariance. Covariance is a measure of linear relationship between the random variables."}, {"text": "However, a better experiment is to compute the natural direct effect. (NDE) This is the effect determined by leaving the relationship between X and M untouched while intervening on the relationship between X and Y."}, {"text": "In statistics, collinearity refers to a linear relationship between two explanatory variables. Two variables are perfectly collinear if there is an exact linear relationship between the two, so the correlation between them is equal to 1 or \u22121."}, {"text": "There is no connection between A and B; the correlation is a coincidence.Thus there can be no conclusion made regarding the existence or the direction of a cause-and-effect relationship only from the fact that A and B are correlated. 
Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained."}, {"text": "There is no connection between A and B; the correlation is a coincidence.Thus there can be no conclusion made regarding the existence or the direction of a cause-and-effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained."}, {"text": "of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the p-vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable \u03b5 \u2014 an unobserved random variable that adds \"noise\" to the linear relationship between the dependent variable and regressors. Thus the model takes the form"}, {"text": "of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the p-vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable \u03b5 \u2014 an unobserved random variable that adds \"noise\" to the linear relationship between the dependent variable and regressors. Thus the model takes the form"}]}, {"question": "Is predictive modeling machine learning", "positive_ctxs": [{"text": "It's more of an approach than a process. Predictive analytics and machine learning go hand-in-hand, as predictive models typically include a machine learning algorithm. 
These models can be trained over time to respond to new data or values, delivering the results the business needs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Notoriety: Despite their age, LCS algorithms are still not widely known even in machine learning communities. As a result, LCS algorithms are rarely considered in comparison to other established machine learning approaches. This is likely due to the following factors: (1) LCS is a relatively complicated algorithmic approach, (2) LCS, rule-based modeling is a different paradigm of modeling than almost all other machine learning approaches."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. 
He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. 
Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}, {"text": "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field.Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein \"algorithmic model\" means more or less the machine learning algorithms like Random forest."}]}, {"question": "How do you understand a concept deeply", "positive_ctxs": [{"text": "8 Powerful Tricks That Make You Grasp New Concepts Faster1) Use mental associations. Colours, acronyms and word associations can be especially useful tools to help you hold on to thoughts, patterns and concepts. 2) Apply the 80/20 principle. 3) Break it down. 4) Write it down. 5) Connect existing knowledge. 6) Try Brain exercises. 7) Learn your way. 8) Teach other people."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Why cant we use linear regression for dimension reduction", "positive_ctxs": [{"text": "Linear regression is used to find the best fitting line between all the points of your dataset (by computing the minimum of a given distance), it does not, in itself, reduce the dimensionality of your data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There are several key assumptions that underlie the use of ANCOVA and affect interpretation of the results. 
The standard linear regression assumptions hold; further we assume that the slope of the covariate is equal across all treatment groups (homogeneity of regression slopes)."}, {"text": "There are several key assumptions that underlie the use of ANCOVA and affect interpretation of the results. The standard linear regression assumptions hold; further we assume that the slope of the covariate is equal across all treatment groups (homogeneity of regression slopes)."}, {"text": "A linear subspace of dimension 2 is a vector plane. A linear subspace that contains all elements but one of a basis of the ambient space is a vector hyperplane. In a vector space of finite dimension n, a vector hyperplane is thus a subspace of dimension n \u2013 1."}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line.Linear predictor functionThe basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line.Linear predictor functionThe basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line.Linear predictor functionThe basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. 
a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}, {"text": "of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.So we define odds of the dependent variable equaling a case (given some linear combination"}]}, {"question": "What does convergence in distribution mean", "positive_ctxs": [{"text": "Convergence in distribution is in some sense the weakest type of convergence. All it says is that the CDF of Xn's converges to the CDF of X as n goes to infinity. It does not require any dependence between the Xn's and X. We saw this type of convergence before when we discussed the central limit theorem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When Xn converges in r-th mean to X for r = 2, we say that Xn converges in mean square (or in quadratic mean) to X.Convergence in the r-th mean, for r \u2265 1, implies convergence in probability (by Markov's inequality). Furthermore, if r > s \u2265 1, convergence in r-th mean implies convergence in s-th mean. Hence, convergence in mean square implies convergence in mean."}, {"text": "In general, convergence in distribution does not imply that the sequence of corresponding probability density functions will also converge. As an example one may consider random variables with densities fn(x) = (1 \u2212 cos(2\u03c0nx))1(0,1). 
These random variables converge in distribution to a uniform U(0, 1), whereas their densities do not converge at all.However, according to Scheff\u00e9\u2019s theorem, convergence of the probability density functions implies convergence in distribution."}, {"text": "Usually, convergence in distribution does not imply convergence almost surely. However, for a given sequence {Xn} which converges in distribution to X0 it is always possible to find a new probability space (\u03a9, F, P) and random variables {Yn, n = 0, 1, ...} defined on it such that Yn is equal in distribution to Xn for each n \u2265 0, and Yn converges to Y0 almost surely."}, {"text": "Convergence in distribution is the weakest form of convergence typically discussed, since it is implied by all other types of convergence mentioned in this article. However, convergence in distribution is very frequently used in practice; most often it arises from application of the central limit theorem."}, {"text": "This expression asserts the pointwise convergence of the empirical distribution function to the true cumulative distribution function. There is a stronger result, called the Glivenko\u2013Cantelli theorem, which states that the convergence in fact happens uniformly over t:"}, {"text": "For random vectors {X1, X2, ...} \u2282 Rk the convergence in distribution is defined similarly. We say that this sequence converges in distribution to a random k-vector X if"}, {"text": "Almost sure convergence implies convergence in probability (by Fatou's lemma), and hence implies convergence in distribution. It is the notion of convergence used in the strong law of large numbers."}]}, {"question": "Is Markov model machine learning", "positive_ctxs": [{"text": "Hidden Markov models have been around for a pretty long time (1970s at least). It's a misnomer to call them machine learning algorithms. 
It is most useful, IMO, for state sequence estimation, which is not a machine learning problem since it is for a dynamical process, not a static classification task."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This type of graphical model is known as a directed graphical model, Bayesian network, or belief network. Classic machine learning models like hidden Markov models, neural networks and newer models such as variable-order Markov models can be considered special cases of Bayesian networks."}, {"text": "In addition, machine learning has been applied to systems biology problems such as identifying transcription factor binding sites using a technique known as Markov chain optimization. Genetic algorithms, machine learning techniques which are based on the natural process of evolution, have been used to model genetic networks and regulatory structures.Other systems biology applications of machine learning include the task of enzyme function prediction, high throughput microarray data analysis, analysis of genome-wide association studies to better understand markers of disease, protein function prediction."}, {"text": "A Markov random field, also known as a Markov network, is a model over an undirected graph. A graphical model with many repeated subunits can be represented with plate notation."}, {"text": "In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. An MEMM is a discriminative model that extends a standard maximum entropy classifier by assuming that the unknown values to be learnt are connected in a Markov chain rather than being conditionally independent of each other. 
MEMMs find applications in natural language processing, specifically in part-of-speech tagging and information extraction."}, {"text": "The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data."}, {"text": "The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data."}, {"text": "The same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data."}]}, {"question": "What does the unsharp mask filter do", "positive_ctxs": [{"text": "The Unsharp Mask filter adjusts the contrast of the edge detail and creates the illusion of a more focused image."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This process requires a mask set, which can be extremely expensive. A mask set can cost over a million US dollars. 
(The smaller the transistors required for the chip, the more expensive the mask will be.)"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Aliasing occurs when adjacent copies of X(f) overlap. The purpose of the anti-aliasing filter is to ensure that the reduced periodicity does not create overlap. The condition that ensures the copies of X(f) do not overlap each other is:"}, {"text": "From the foregoing, we can know that the nonlinear filters have quite different behavior compared to linear filters. The most important characteristic is that, for nonlinear filters, the filter output or response of the filter does not obey the principles outlined earlier, particularly scaling and shift invariance. Furthermore, a nonlinear filter can produce results that vary in a non-intuitive manner."}, {"text": "A basic method of template matching uses an image patch (template), tailored to a specific feature of the search image, which we want to detect. This technique can be easily performed on grey images or edge images. The cross correlation output will be highest at places where the image structure matches the mask structure, where large image values get multiplied by large mask values."}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "How do you find the joint distribution of two random variables", "positive_ctxs": [{"text": "The joint behavior of two random variables X and Y is determined by the. 
joint cumulative distribution function (cdf):(1.1) FXY (x, y) = P(X \u2264 x, Y \u2264 y),where X and Y are continuous or discrete. For example, the probability. P(x1 \u2264 X \u2264 x2,y1 \u2264 Y \u2264 y2) = F(x2,y2) \u2212 F(x2,y1) \u2212 F(x1,y2) + F(x1,y1)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The odds ratio can also be defined in terms of the joint probability distribution of two binary random variables. The joint distribution of binary random variables X and Y can be written"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables."}, {"text": "is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space"}, {"text": "is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. 
In practice, one often disposes of the space"}, {"text": "When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables"}, {"text": "The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution."}]}, {"question": "How do you use Spearman's rank correlation coefficient", "positive_ctxs": [{"text": "Spearman Rank Correlation: Worked Example (No Tied Ranks)The formula for the Spearman rank correlation coefficient when there are no tied ranks is: Step 1: Find the ranks for each individual subject. Step 2: Add a third column, d, to your data. Step 5: Insert the values into the formula.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, Spearman's rank correlation coefficient is useful to measure the statistical dependence between the rankings of athletes in two tournaments. And the Kendall rank correlation coefficient is another approach."}, {"text": "The second approach to approximating the Spearman's rank correlation coefficient from streaming data involves the use of Hermite series based estimators. These estimators, based on Hermite polynomials,"}, {"text": "Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (\u03c4) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. 
If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions."}, {"text": "Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (\u03c4) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions."}, {"text": "certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be"}, {"text": "This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way if"}, {"text": "This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. 
In the same way if"}]}, {"question": "What do you mean by knowledge representation", "positive_ctxs": [{"text": "Knowledge-representation is a field of artificial intelligence that focuses on designing computer representations that capture information about the world that can be used to solve complex problems. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "Knowledge representation goes hand in hand with automated reasoning because one of the main purposes of explicitly representing knowledge is to be able to reason about that knowledge, to make inferences, assert new knowledge, etc. Virtually all knowledge representation languages have a reasoning or inference engine as part of the system.A key trade-off in the design of a knowledge representation formalism is that between expressivity and practicality. The ultimate knowledge representation formalism in terms of expressive power and compactness is First Order Logic (FOL)."}, {"text": "Cyc established its own Frame language and had large numbers of analysts document various areas of common sense reasoning in that language. 
The knowledge recorded in Cyc included common sense models of time, causality, physics, intentions, and many others.The starting point for knowledge representation is the knowledge representation hypothesis first formalized by Brian C. Smith in 1985:"}, {"text": "If I know that you have $12 now, then it would be expected that with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the Markov property, the memoryless property of a stochastic process."}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}]}, {"question": "What is feature extraction in image processing", "positive_ctxs": [{"text": "Feature extraction describes the relevant shape information contained in a pattern so that the task of classifying the pattern is made easy by a formal procedure. In pattern recognition and in image processing, feature extraction is a special form of dimensionality reduction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector."}, {"text": "The goals vary from noise removal to feature abstraction. Filtering image data is a standard process used in almost all image processing systems. Nonlinear filters are the most utilized forms of filter construction."}, {"text": "Originally the Kuwahara filter was proposed for use in processing RI-angiocardiographic images of the cardiovascular system. 
The fact that any edges are preserved when smoothing makes it especially useful for feature extraction and segmentation and explains why it is used in medical imaging."}, {"text": "In computer vision and image processing feature detection includes methods for computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions."}, {"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "Feature detection is a low-level image processing operation. That is, it is usually performed as the first operation on an image, and examines every pixel to see if there is a feature present at that pixel. If this is part of a larger algorithm, then the algorithm will typically only examine the image in the region of the features."}, {"text": "In machine learning, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. Feature extraction is related to dimensionality reduction.When the input data to an algorithm is too large to be processed and it is suspected to be redundant (e.g. 
the same measurement in both feet and meters, or the repetitiveness of images presented as pixels), then it can be transformed into a reduced set of features (also named a feature vector)."}]}, {"question": "What is the purpose of a goodness of fit test", "positive_ctxs": [{"text": "The goodness of fit test is a statistical hypothesis test to see how well sample data fit a distribution from a population with a normal distribution. Put differently, this test shows if your sample data represents the data you would expect to find in the actual population or if it is somehow skewed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "R2 is a statistic that will give some information about the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data."}, {"text": "The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g."}, {"text": "The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g."}, {"text": "Given a set of candidate models for the data, the preferred model is the one with the minimum AIC value. 
Thus, AIC rewards goodness of fit (as assessed by the likelihood function), but it also includes a penalty that is an increasing function of the number of estimated parameters. The penalty discourages overfitting, which is desired because increasing the number of parameters in the model almost always improves the goodness of the fit."}, {"text": "The theory of minimum-distance estimation is related to that for the asymptotic distribution of the corresponding statistical goodness of fit tests. Often the cases of the Cram\u00e9r\u2013von Mises criterion, the Kolmogorov\u2013Smirnov test and the Anderson\u2013Darling test are treated simultaneously by treating them as special cases of a more general formulation of a distance measure. Examples of the theoretical results that are available are: consistency of the parameter estimates; the asymptotic covariance matrices of the parameter estimates."}, {"text": "In 1900, Pearson published a paper on the \u03c72 test which is considered to be one of the foundations of modern statistics. In this paper, Pearson investigated a test of goodness of fit."}]}, {"question": "How do you test if a difference is statistically significant", "positive_ctxs": [{"text": "If your p-value is less than or equal to the set significance level, the data is considered statistically significant. 
As a general rule, the significance level (or alpha) is commonly set to 0.05, meaning that the probability of observing the differences seen in your data by chance is just 5%."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "This shortcoming is especially concerning given that even a small error in blinding can produce a statistically significant result in the absence of any real difference between test groups when a study is sufficiently powered (i.e. statistical significance is not robust to bias). As such, many statistically significant results in randomized controlled trials may be caused by error in blinding."}, {"text": "This shortcoming is especially concerning given that even a small error in blinding can produce a statistically significant result in the absence of any real difference between test groups when a study is sufficiently powered (i.e. statistical significance is not robust to bias). As such, many statistically significant results in randomized controlled trials may be caused by error in blinding."}, {"text": "\"You cannot legitimately test a hypothesis on the same data that first suggested that hypothesis. Once you have a hypothesis, design a study to search specifically for the effect you now think is there. If the result of this test is statistically significant, you have real evidence at last.\""}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. 
Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "A chi-squared test, also written as \u03c72 test, is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "How is Machine Learning changing the world", "positive_ctxs": [{"text": "Machine learning is changing the world by transforming all segments including healthcare services, education, transport, food, entertainment, and different assembly line and many more. It will impact lives in almost every aspect, including housing, cars, shopping, food ordering, etc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect."}, {"text": "On March 1, 2018, Google released its Machine Learning Crash Course (MLCC). 
Originally designed to help equip Google employees with practical artificial intelligence and machine learning fundamentals, Google rolled out its free TensorFlow workshops in several cities around the world before finally releasing the course to the public."}, {"text": "On March 1, 2018, Google released its Machine Learning Crash Course (MLCC). Originally designed to help equip Google employees with practical artificial intelligence and machine learning fundamentals, Google rolled out its free TensorFlow workshops in several cities around the world before finally releasing the course to the public."}, {"text": "The following tree was constructed using JBoost on the spambase dataset (available from the UCI Machine Learning Repository). In this example, spam is coded as 1 and regular email is coded as \u22121."}]}, {"question": "What should I choose simple linear regression or multiple linear regression", "positive_ctxs": [{"text": "In simple linear regression a single independent variable is used to predict the value of a dependent variable. In multiple linear regression two or more independent variables are used to predict the value of a dependent variable. The difference between the two is the number of independent variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. 
The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What is prior and posterior", "positive_ctxs": [{"text": "Prior probability represents what is originally believed before new evidence is introduced, and posterior probability takes this new information into account. 
A posterior probability can subsequently become a prior for a new updated posterior probability as new information arises and is incorporated into the analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In Bayesian probability theory, if the posterior distributions p(\u03b8 | x) are in the same probability distribution family as the prior probability distribution p(\u03b8), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | \u03b8). For example, the Gaussian family is conjugate to itself (or self-conjugate) with respect to a Gaussian likelihood function: if the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian. This means that the Gaussian distribution is a conjugate prior for the likelihood that is also Gaussian."}, {"text": "So, as remarked by Silvey, for large n, the variance is small and hence the posterior distribution is highly concentrated, whereas the assumed prior distribution was very diffuse. This is in accord with what one would hope for, as vague prior knowledge is transformed (through Bayes theorem) into a more precise posterior knowledge by an informative experiment. For small n the Haldane Beta(0,0) prior results in the largest posterior variance while the Bayes Beta(1,1) prior results in the more concentrated posterior."}, {"text": "Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, and hence the prior and posterior distributions come from the same family, it can easily be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. 
The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution."}, {"text": "Another example of the same phenomena is the case when the prior estimate and a measurement are normally distributed. If the prior is centered at B with deviation \u03a3, and the measurement is centered at b with deviation \u03c3, then the posterior is centered at"}, {"text": "is typically well-defined and finite. Recall that, for a proper prior, the Bayes estimator minimizes the posterior expected loss. When the prior is improper, an estimator which minimizes the posterior expected loss is referred to as a generalized Bayes estimator."}, {"text": "(This intuition is ignoring the effect of the prior distribution. Furthermore, the posterior is a distribution over distributions. The posterior distribution in general describes the parameter in question, and in this case the parameter itself is a discrete probability distribution, i.e."}, {"text": "appears both in the numerator and the denominator of the posterior probability, and it does not depend on the integration variable x, hence it cancels out, and it is irrelevant to the final result. Similarly the normalizing factor for the prior probability, the beta function B(\u03b1Prior,\u03b2Prior) cancels out and it is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior"}]}, {"question": "What is the moment generating function of binomial distribution", "positive_ctxs": [{"text": "The Moment Generating Function of the Binomial Distribution (3) dM_x(t)/dt = n(q + pe^t)^(n\u22121) pe^t = npe^t(q + pe^t)^(n\u22121). 
Evaluating this at t = 0 gives (4) E(x) = np(q + p)^(n\u22121) = np."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "However, the log-normal distribution is not determined by its moments. This implies that it cannot have a defined moment generating function in a neighborhood of zero."}, {"text": "th moment of the function given in the brackets. This identity follows by the convolution theorem for moment generating function and applying the chain-rule for differentiating a product."}, {"text": "The distribution of N thus is the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d)."}, {"text": "the characteristic function is the moment-generating function of iX or the moment generating function of X evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform."}, {"text": "If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann\u2013Stieltjes integral"}, {"text": "If the function is a probability distribution, then the zeroth moment is the total probability (i.e. one), the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics."}, {"text": "In mathematics, the moments of a function are quantitative measures related to the shape of the function's graph. The concept is used in both mechanics and statistics. 
If the function represents mass, then the zeroth moment is the total mass, the first moment divided by the total mass is the center of mass, and the second moment is the rotational inertia."}]}, {"question": "How do you find the variance of a ratio", "positive_ctxs": [{"text": "It is usually defined as the ratio of the variance to the mean. As a formula, that's: D = \u03c32 / \u03bc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. 
In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}]}, {"question": "What is Expectation Maximization for missing data", "positive_ctxs": [{"text": "Expectation maximization is applicable whenever the data are missing completely at random or missing at random-but unsuitable when the data are not missing at random."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2."}, {"text": "That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2."}, {"text": "In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as \"unit imputation\"; when substituting for a component of a data point, it is known as \"item imputation\". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency."}, {"text": "In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as \"unit imputation\"; when substituting for a component of a data point, it is known as \"item imputation\". 
There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information."}, {"text": "Because missing data can create problems for analyzing data, imputation is seen as a way to avoid pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information."}]}, {"question": "Can a biased estimator be consistent", "positive_ctxs": [{"text": "Biased but consistent , it approaches the correct value, and so it is consistent. 
), these are both negatively biased but consistent estimators."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "Efficiency in statistics is important because they allow one to compare the performance of various estimators. 
Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gathers around a number closer to the true value."}, {"text": "Efficiency in statistics is important because they allow one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gathers around a number closer to the true value."}, {"text": "Efficiency in statistics is important because they allow one to compare the performance of various estimators. Although an unbiased estimator is usually favored over a biased one, a more efficient biased estimator can sometimes be more valuable than a less efficient unbiased estimator. For example, this can occur when the values of the biased estimator gathers around a number closer to the true value."}, {"text": "The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (n \u2192 \u221e), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability."}, {"text": "The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (n \u2192 \u221e), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability."}]}, {"question": "What is the importance of a sampling approach to the estimation of expected values in Monte Carlo algorithms", "positive_ctxs": [{"text": "Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. 
The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Importance sampling is a variance reduction technique that can be used in the Monte Carlo method. The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these \"important\" values are emphasized by sampling more frequently, then the estimator variance can be reduced."}, {"text": "The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the \"art\" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling."}, {"text": "In computing, a Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with a certain (typically small) probability. Two examples of such algorithms are Karger\u2013Stein algorithm and Monte Carlo algorithm for minimum Feedback arc set.The name refers to the grand casino in the Principality of Monaco at Monte Carlo, which is well-known around the world as an icon of gambling. The term \"Monte Carlo\" was first introduced in 1947 by Nicholas Metropolis.Las Vegas algorithms are the subset of Monte Carlo algorithms that always produce the correct answer."}, {"text": "(See also the Bayes factor article. 
)In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods \u2014 particularly, Markov chain Monte Carlo methods such as Gibbs sampling \u2014 for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, Variational Bayes provides a locally-optimal, exact analytical solution to an approximation of the posterior."}, {"text": "Another good example of the LLN is the Monte Carlo method. These methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger the number of repetitions, the better the approximation tends to be."}, {"text": "Quantum Monte Carlo encompasses a large family of computational methods whose common aim is the study of complex quantum systems. One of the major goals of these approaches is to provide a reliable solution (or an accurate approximation) of the quantum many-body problem. The diverse flavor of quantum Monte Carlo approaches all share the common use of the Monte Carlo method to handle the multi-dimensional integrals that arise in the different formulations of the many-body problem."}, {"text": "Las Vegas algorithms were introduced by L\u00e1szl\u00f3 Babai in 1979, in the context of the graph isomorphism problem, as a dual to Monte Carlo algorithms. Babai introduced the term \"Las Vegas algorithm\" alongside an example involving coin flips: the algorithm depends on a series of independent coin flips, and there is a small chance of failure (no result). 
However, in contrast to Monte Carlo algorithms, the Las Vegas algorithm can guarantee the correctness of any reported result."}]}, {"question": "How do you do principal component analysis in R", "positive_ctxs": [{"text": "To perform principal component analysis using the correlation matrix using the prcomp() function, set the scale argument to TRUE . Plot the first two PCs of the correlation matrix using the autoplot() function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "Multilinear principal component analysis (MPCA) is a multilinear extension of principal component analysis (PCA). MPCA is employed in the analysis of n-way arrays, i.e. a cube or hyper-cube of numbers, also informally referred to as a \"data tensor\"."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "Many data analysis software packages provide for feature extraction and dimension reduction. Common numerical programming environments such as MATLAB, SciLab, NumPy, Sklearn and the R language provide some of the simpler feature extraction techniques (e.g. principal component analysis) via built-in commands."}, {"text": "Traditional techniques like principal component analysis do not consider the intrinsic geometry of the data. Laplacian eigenmaps builds a graph from neighborhood information of the data set. Each data point serves as a node on the graph and connectivity between nodes is governed by the proximity of neighboring points (using e.g."}]}, {"question": "How does logistic regression algorithm work", "positive_ctxs": [{"text": "Logistic regression is a supervised learning classification algorithm used to predict the probability of a target variable. The nature of target or dependent variable is dichotomous, which means there would be only two possible classes. Mathematically, a logistic regression model predicts P(Y=1) as a function of X."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). 
The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}, {"text": "In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier."}]}, {"question": "What is central limit theorem in probability", "positive_ctxs": [{"text": "In the study of probability theory, the central limit theorem (CLT) states that the distribution of sample approximates a normal distribution (also known as a \u201cbell curve\u201d) as the sample size becomes larger, assuming that all samples are identical in size, and regardless of the population distribution shape."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting."}, {"text": "The drawback is that the central limit theorem is applicable when the sample size is sufficiently large. 
Therefore, it is less and less applicable with the sample involved in modern inference instances. The fault is not in the sample size on its own part."}, {"text": "The convergence of a random walk toward the Wiener process is controlled by the central limit theorem, and by Donsker's theorem. For a particle in a known fixed position at t = 0, the central limit theorem tells us that after a large number of independent steps in the random walk, the walker's position is distributed according to a normal distribution of total variance:"}, {"text": "Here, the central limit theorem states that the distribution of the sample mean \"for very large samples\" is approximately normally distributed, if the distribution is not heavy tailed."}, {"text": "Here, the central limit theorem states that the distribution of the sample mean \"for very large samples\" is approximately normally distributed, if the distribution is not heavy tailed."}, {"text": "It is necessary to make assumptions about the nature of the experimental errors to statistically test the results. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases."}, {"text": "It is necessary to make assumptions about the nature of the experimental errors to statistically test the results. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases."}]}, {"question": "What is learning algorithm in machine learning", "positive_ctxs": [{"text": "Machine learning algorithms are the engines of machine learning, meaning it is the algorithms that turn a data set into a model. 
Which kind of algorithm works best (supervised, unsupervised, classification, regression, etc.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the opposite of risk aversion", "positive_ctxs": [{"text": "Risk tolerance"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Unlike ARA whose units are in $\u22121, RRA is a dimension-less quantity, which allows it to be applied universally. Like for absolute risk aversion, the corresponding terms constant relative risk aversion (CRRA) and decreasing/increasing relative risk aversion (DRRA/IRRA) are used. This measure has the advantage that it is still a valid measure of risk aversion, even if the utility function changes from risk averse to risk loving as c varies, i.e."}, {"text": "They are measured as the n-th root of the n-th central moment. The symbol used for risk aversion is A or An."}, {"text": "Hyperbolic absolute risk aversion (HARA) is the most general class of utility functions that are usually used in practice (specifically, CRRA (constant relative risk aversion, see below), CARA (constant absolute risk aversion), and quadratic utility all exhibit HARA and are often used because of their mathematical tractability). 
A utility function exhibits HARA if its absolute risk aversion is a hyperbolic function, namely"}, {"text": "In one model in monetary economics, an increase in relative risk aversion increases the impact of households' money holdings on the overall economy. In other words, the more the relative risk aversion increases, the more money demand shocks will impact the economy."}, {"text": "The equally distributed welfare equivalent income associated with an Atkinson Index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter."}, {"text": "The equally distributed welfare equivalent income associated with an Atkinson Index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter."}, {"text": "In modern portfolio theory, risk aversion is measured as the additional expected reward an investor requires to accept additional risk. If an investor is risk-averse, they will invest in multiple uncertain assets, but only when the predicted return on a portfolio that is uncertain is greater than the predicted return on one that is not uncertain will the investor will prefer the former. Here, the risk-return spectrum is relevant, as it results largely from this type of risk aversion."}]}, {"question": "How do you fix autocorrelation", "positive_ctxs": [{"text": "There are basically two methods to reduce autocorrelation, of which the first one is most important:Improve model fit. Try to capture structure in the data in the model. If no more predictors can be added, include an AR1 model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? 
How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "Is linear regression Bayesian", "positive_ctxs": [{"text": "In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) 
In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients \u03b2 are assumed to be random variables with a specified prior distribution."}, {"text": "In statistics, Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference. When the regression model has errors that have a normal distribution, and if a particular form of prior distribution is assumed, explicit results are available for the posterior probability distributions of the model's parameters."}]}, {"question": "How do you do a regression analysis with multiple variables", "positive_ctxs": [{"text": "0:3910:15Suggested clip \u00b7 118 secondsConducting a Multiple Regression using Microsoft Excel Data YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}]}, {"question": "What are the benefits of social media analytics", "positive_ctxs": [{"text": "Creative Ways to Benefit From Social Media AnalyticsEngage Better With Your Audience. Many businesses have a hard time keeping up with the vast amount of social media activity that impacts their brand. Improve Customer Relations. Monitor Your Competition. Identify and Engage With Your Top Customers. Find Out Where Your Industry is Heading."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Much of the software that is currently used for learning analytics duplicates functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions. 
Some examples of learning analytics software tools include:"}, {"text": "Recent research has indicated that social media is becoming a stronger part of younger individuals' media culture, as more intimate stories are being told via social media and are being intertwined with gender, sexuality, and relationships. Teens are avid internet and social media users in the United States. Research has found that almost all U.S. teens (95%) aged 12 through 17 are online, compared to only 78% of adults. Of these teens, 80% have profiles on social media sites, as compared to only 64% of the online population aged 30 and older."}, {"text": "Even still, 72% of US Adults had at least one social media account in 2019, and 65% of Americans believe that social media is an effective way to reach out to politicians. Some of the main concerns with social media lie with the spread of deliberately false or misinterpreted information and the spread of hate and extremism. Social scientist experts explain the growth of misinformation and hate as a result of the increase in echo chambers. Fueled by confirmation bias, online echo chambers allow users to be steeped within their own ideology."}, {"text": "Social media today is a popular medium for the candidates to campaign and for gauging the public reaction to the campaigns. Social media can also be used as an indicator of the voter opinion regarding the poll. Some research studies have shown that predictions made using social media signals can match traditional opinion polls. Regarding the 2016 U.S. presidential election, a major concern has been that of the effect of false stories spread throughout social media."}, {"text": "Because social media is tailored to your interests and your selected friends, it is an easy outlet for political echo chambers. 
Another Pew Research poll in 2019 showed that 28% of US adults \"often\" find their news through social media, and 55% of US adults get their news from social media either \"often\" or \"sometimes\". Additionally, more people are reported as going to social media for their news as the Coronavirus has restricted politicians to online campaigns and social media live streams."}, {"text": "Bias has been a feature of the mass media since its birth with the invention of the printing press. The expense of early printing equipment restricted media production to a limited number of people. Historians have found that publishers often served the interests of powerful social groups."}, {"text": "The increased use of social media as a data source for researchers has led to new uncertainties regarding the definition of human subject research. Privacy, confidentiality, and informed consent are key concerns, yet it is unclear when social media users qualify as human subjects. conclude that if access to the social media content is public, information is identifiable but not private, and information gathering requires no interaction with the person who posted it online, then the research is unlikely to qualify as human subjects research."}]}, {"question": "What is the z score for 50 confidence interval", "positive_ctxs": [{"text": "Area in Tails \u2013 Confidence Level / Area between 0 and z-score / z-score: 50% / 0.2500 / 0.674; 80% / 0.4000 / 1.282; 90% / 0.4500 / 1.645; 95% / 0.4750 / 1.960; 2 more rows"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. 
z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, particularly a binomial proportion confidence interval. The best-known is due to Edwin Bidwell Wilson, in Wilson (1927): the midpoint of the Wilson score interval corresponding to"}, {"text": "One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, particularly a binomial proportion confidence interval. 
The best-known is due to Edwin Bidwell Wilson, in Wilson (1927): the midpoint of the Wilson score interval corresponding to"}]}, {"question": "What are the advantages of using a naive Bayes classifier as opposed to other methods", "positive_ctxs": [{"text": "Advantages of Naive Bayes Classifier: It is simple and easy to implement. It doesn't require as much training data. It handles both continuous and discrete data. It is highly scalable with the number of predictors and data points. It is fast and can be used to make real-time predictions. More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a Naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically maximized using maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see #Estimating the coefficients."}, {"text": "In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. 
In particular, learning in a Naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically maximized using maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see #Estimating the coefficients."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "While naive Bayes often fails to produce a good estimate for the correct class probabilities, this may not be a requirement for many applications. For example, the naive Bayes classifier will make the correct MAP decision rule classification so long as the correct class is more probable than any other class. 
This is true regardless of whether the probability estimate is slightly, or even grossly inaccurate."}]}, {"question": "How does naive Bayes work in text classification", "positive_ctxs": [{"text": "Since a Naive Bayes text classifier is based on the Bayes's Theorem, which helps us compute the conditional probabilities of occurrence of two events based on the probabilities of occurrence of each individual event, encoding those probabilities is extremely useful."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}, {"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. 
Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}, {"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}, {"text": "Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are sound theoretical reasons for the apparently implausible efficacy of naive Bayes classifiers. Still, a comprehensive comparison with other classification algorithms in 2006 showed that Bayes classification is outperformed by other approaches, such as boosted trees or random forests. An advantage of naive Bayes is that it only requires a small number of training data to estimate the parameters necessary for classification."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. 
In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}]}, {"question": "What does the Taguchi loss function indicate", "positive_ctxs": [{"text": "The Taguchi loss function is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. This means that if the product dimension goes out of the tolerance limit the quality of the product drops suddenly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Taguchi loss function is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement), it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. 
Instead 'loss' in value progressively increases as variation increases from the intended condition."}, {"text": "Through his concept of the quality loss function, Taguchi explained that from the customer's point of view this drop of quality is not sudden. The customer experiences a loss of quality the moment product specification deviates from the 'target value'. This 'loss' is depicted by a quality loss function and it follows a parabolic curve mathematically given by L = k(y\u2013m)2, where m is the theoretical 'target value' or 'mean value' and y is the actual size of the product, k is a constant and L is the loss."}, {"text": "Consequently, the hinge loss function cannot be used with gradient descent methods or stochastic gradient descent methods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient at"}, {"text": "that minimizes the average value of the loss function on the training set, i.e., minimizes the empirical risk. It does so by starting with a model, consisting of a constant function"}, {"text": "that minimizes the average value of the loss function on the training set, i.e., minimizes the empirical risk. It does so by starting with a model, consisting of a constant function"}, {"text": "that minimizes the average value of the loss function on the training set, i.e., minimizes the empirical risk. It does so by starting with a model, consisting of a constant function"}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression."}]}, {"question": "How does the Adam Optimizer work", "positive_ctxs": [{"text": "Adam can be looked at as a combination of RMSprop and Stochastic Gradient Descent with momentum. 
It uses the squared gradients to scale the learning rate like RMSprop and it takes advantage of momentum by using moving average of the gradient instead of gradient itself like SGD with momentum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "How do I save and restore model in Tensorflow", "positive_ctxs": [{"text": "In (and after) TensorFlow version 0.11. 0RC1, you can save and restore your model directly by calling tf. train. export_meta_graph and tf."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. How he got into my pajamas I don't know.\""}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs."}, {"text": "How do neurons migrate to the proper position in the central and peripheral systems? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons."}, {"text": "On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. So please don't tell us that reinforcement works and punishment does not, because the opposite is the case.\" This was a joyous moment, in which I understood an important truth about the world: because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them."}, {"text": "I assert that this is not so ... 
The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not."}]}, {"question": "What is universal Turing machine in TOC", "positive_ctxs": [{"text": "In computer science, a universal Turing machine (UTM) is a Turing machine that simulates an arbitrary Turing machine on arbitrary input. In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In computer science, a universal Turing machine (UTM) is a Turing machine that simulates an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated as well as the input to that machine from its own tape. Alan Turing introduced the idea of such a machine in 1936\u20131937."}, {"text": "A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church\u2013Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete."}, {"text": "An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. 
The UTM theorem proves the existence of such a function."}, {"text": "When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols."}, {"text": "Thus in this example the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be Universal with 6 colours."}, {"text": "However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality and known as \"semi-weak\" or \"weak\" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs."}, {"text": "Kolmogorov randomness defines a string (usually of bits) as being random if and only if any computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that \"program\" means a program for this universal machine. A random string in this sense is \"incompressible\" in that it is impossible to \"compress\" the string into a program that is shorter than the string itself."}]}, {"question": "What is CNN feature vector", "positive_ctxs": [{"text": "A feature vector is just a vector that contains information describing an object's important characteristics. 
In image processing, features can take many forms. A simple feature representation of an image is the raw intensity value of each pixel. However, more complicated feature representations are also possible."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector"}, {"text": "maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "One of the ways of achieving binary classification is using a linear predictor function (related to the perceptron) with a feature vector as input. The method consists of calculating the scalar product between the feature vector and a vector of weights,"}, {"text": "One of the ways of achieving binary classification is using a linear predictor function (related to the perceptron) with a feature vector as input. The method consists of calculating the scalar product between the feature vector and a vector of weights,"}, {"text": "At each iteration, take the feature vector (X) belonging to one random instance, and the feature vectors of the instance closest to X (by Euclidean distance) from each class. The closest same-class instance is called 'near-hit', and the closest different-class instance is called 'near-miss'. Update the weight vector such that"}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. 
During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "What is weighted kappa", "positive_ctxs": [{"text": "The weighted kappa is calculated using a predefined table of weights which measure the degree of disagreement between the two raters, the higher the disagreement the higher the weight."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "The weighted kappa allows disagreements to be weighted differently and is especially useful when codes are ordered. Three matrices are involved, the matrix of observed scores, the matrix of expected scores based on chance agreement, and the weight matrix. Weight matrix cells located on the diagonal (upper-left to bottom-right) represent agreement and thus contain zeros."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Cohen's kappa coefficient (\u03ba) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as \u03ba takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement."}, {"text": "Cohen's kappa coefficient (\u03ba) is a statistic that is used to measure inter-rater reliability (and also Intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement calculation, as \u03ba takes into account the possibility of the agreement occurring by chance. There is controversy surrounding Cohen's kappa due to the difficulty in interpreting indices of agreement."}]}, {"question": "What is splitting criterion in data mining", "positive_ctxs": [{"text": "Decision Tree node splitting is an important step, the core issue is how to choose the splitting attribute. 5, the splitting criteria is calculating information gain of each attribute, then the attribute with the maximum information gain or information gain ratio is selected as splitting attribute."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Data wrangling is a superset of data mining and requires processes that some data mining uses, but not always. The process of data mining is to find patterns within large data sets, where data wrangling transforms data in order to deliver insights about that data. 
That data wrangling is a superset of data mining does not mean that data mining does not use it; there are many use cases for data wrangling in data mining."}, {"text": "Not all patterns found by data mining algorithms are necessarily valid. It is common for data mining algorithms to find patterns in the training set which are not present in the general data set. To overcome this, the evaluation uses a test set of data on which the data mining algorithm was not trained."}, {"text": "A tree is built by splitting the source set, constituting the root node of the tree, into subsets\u2014which constitute the successor children. The splitting is based on a set of splitting rules based on classification features. This process is repeated on each derived subset in a recursive manner called recursive partitioning."}, {"text": "A tree is built by splitting the source set, constituting the root node of the tree, into subsets\u2014which constitute the successor children. The splitting is based on a set of splitting rules based on classification features. This process is repeated on each derived subset in a recursive manner called recursive partitioning."}, {"text": "While the term \"data mining\" itself may have no ethical implications, it is often associated with the mining of information in relation to peoples' behavior (ethical and otherwise). The ways in which data mining can be used can in some cases and contexts raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation."}, {"text": "Data preprocessing is an important step in the data mining process. 
The phrase \"garbage in, garbage out\" is particularly applicable to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values (e.g., Income: \u2212100), impossible data combinations (e.g., Sex: Male, Pregnant: Yes), and missing values, etc."}, {"text": "If the test data and criterion data are collected at the same time, this is referred to as concurrent validity evidence. If the test data are collected first in order to predict criterion data collected at a later point in time, then this is referred to as predictive validity evidence."}]}, {"question": "What are the assumptions of multinomial logistic regression", "positive_ctxs": [{"text": "Multinomial logistic regression does have assumptions, such as the assumption of independence among the dependent variable choices. This assumption states that the choice of or membership in one category is not related to the choice or membership of another category (i.e., the dependent variable)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. This can make it difficult to compare different treatments of the subject in different texts. The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model."}, {"text": "There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. This can make it difficult to compare different treatments of the subject in different texts. 
The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model."}, {"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. 
A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}]}, {"question": "What does the confidence interval tell you", "positive_ctxs": [{"text": "he confidence interval tells you more than just the possible range around the estimate. It also tells you about how stable the estimate is. A stable estimate is one that would be close to the same value if the survey were repeated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "This is demonstrated by showing that zero is outside of the specified confidence interval of the measurement on either side, typically within the real numbers. Failure to exclude the null hypothesis (with any confidence) does logically NOT confirm or support the (unprovable) null hypothesis. (When you have not proven something is e.g."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. 
Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}, {"text": "Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%."}]}, {"question": "What is the disadvantage of cluster sampling", "positive_ctxs": [{"text": "Assuming the sample size is constant across sampling methods, cluster sampling generally provides less precision than either simple random sampling or stratified sampling. This is the main disadvantage of cluster sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. 
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision."}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. 
This is because as the sample size increases, sample means cluster more closely around the population mean."}]}, {"question": "What is a target in machine learning", "positive_ctxs": [{"text": "The target variable of a dataset is the feature of a dataset about which you want to gain a deeper understanding. A supervised machine learning algorithm uses historical data to learn patterns and uncover relationships between other features of your dataset and the target."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Domain adaptation is a field associated with machine learning and transfer learning. This scenario arises when we aim at learning from a source data distribution a well performing model on a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists in adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution)."}, {"text": "Decision tree learning is a method commonly used in data mining. The goal is to create a model that predicts the value of a target variable based on several input variables."}, {"text": "Decision tree learning is a method commonly used in data mining. The goal is to create a model that predicts the value of a target variable based on several input variables."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the difference between bias and selection", "positive_ctxs": [{"text": "Bias is stated as a penchant that prevents objective consideration of an issue or situation; basically the formation of opinion beforehand without any examination. Selection is stated as the act of choosing or selecting a preference; resulting in a carefully chosen and representative choice."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes. Their fundamental differences have been well-studied in regression variable selection and autoregression order selection problems. In general, if the goal is prediction, AIC and leave-one-out cross-validations are preferred."}, {"text": "In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, \"bias\" is an objective property of an estimator."}, {"text": "In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. 
In statistics, \"bias\" is an objective property of an estimator."}, {"text": "Media bias is the bias or perceived bias of journalists and news producers within the mass media in the selection of events, the stories that are reported, and how they are covered. The term generally implies a pervasive or widespread bias violating the standards of journalism, rather than the perspective of an individual journalist or article. The level of media bias in different nations is debated."}, {"text": "The idea is to propagate a population of feasible candidate solutions using mutation and selection mechanisms. The mean field interaction between the individuals is encapsulated in the selection and the cross-over mechanisms."}]}, {"question": "How do you validate a model performance", "positive_ctxs": [{"text": "Using proper validation techniques helps you understand your model, but most importantly, estimate an unbiased generalization performance.Splitting your data. k-Fold Cross-Validation (k-Fold CV) Leave-one-out Cross-Validation (LOOCV) Nested Cross-Validation. Time Series CV. Comparing Models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Example: On a 1-5 scale where 1 means disagree completely and 5 means agree completely, how much do you agree with the following statement. \"The Federal government should do more to help people facing foreclosure on their homes. \"A multinomial discrete-choice model can examine the responses to these questions (model G, model H, model I)."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? 
With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What are the disadvantages of a box plot", "positive_ctxs": [{"text": "One drawback of boxplots is that they tend to emphasize the tails of a distribution, which are the least certain points in the data set. They also hide many of the details of the distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "Since the mathematician John W. 
Tukey popularized this type of visual data display in 1969, several variations on the traditional box plot have been described. Two of the most common are variable width box plots and notched box plots (see Figure 4)."}, {"text": "Variable width box plots illustrate the size of each group whose data is being plotted by making the width of the box proportional to the size of the group. A popular convention is to make the box width proportional to the square root of the size of the group.Notched box plots apply a \"notch\" or narrowing of the box around the median. Notches are useful in offering a rough guide to significance of difference of medians; if the notches of two boxes do not overlap, this offers evidence of a statistically significant difference between the medians."}, {"text": "In descriptive statistics, a box plot or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram. Outliers may be plotted as individual points."}, {"text": "Complex online box plot creator with example data - see also BoxPlotR: a web tool for generation of box plots Spitzer et al. Nature Methods 11, 121\u2013122 (2014)"}, {"text": "For a boxplot, only the vertical heights correspond to the visualized data set while horizontal width of the box is irrelevant. Outliers located outside the fences in a boxplot can be marked as any choice of symbol, such as an \"x\" or \"o\". 
The fences are sometimes also referred to as \"whiskers\" while the entire plot visual is called a \"box-and-whisker\" plot."}]}, {"question": "How would you prepare a dataset for deep learning", "positive_ctxs": [{"text": "Preparing Your Dataset for Machine Learning: 8 Basic Techniques That Make Your Data BetterArticulate the problem early.Establish data collection mechanisms.Format data to make it consistent.Reduce data.Complete data cleaning.Decompose data.Rescale data.Discretize data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors i.e."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. 
However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "'If you built this house in a month, you would be able to sell it before the summer. 'In ordinary non-conditional sentences, such adverbials are compatible with perfective aspect but not with imperfective aspect:"}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}, {"text": "Imagine you have a cluster of news articles on a particular event, and you want to produce one summary. Each article is likely to have many similar sentences, and you would only want to include distinct ideas in the summary. To address this issue, LexRank applies a heuristic post-processing step that builds up a summary by adding sentences in rank order, but discards any sentences that are too similar to ones already placed in the summary."}]}, {"question": "What does rejection sampling mean in Bayesian nets", "positive_ctxs": [{"text": "\u2013 Rejection sampling: reject samples disagreeing with evidence. \u2013 Markov chain Monte Carlo (MCMC): sample from a stochastic process. whose stationary distribution is the true posterior."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For many distributions, finding a proposal distribution that includes the given distribution without a lot of wasted space is difficult. 
An extension of rejection sampling that can be used to overcome this difficulty and efficiently sample from a wide variety of distributions (provided that they have log-concave density functions, which is in fact the case for most of the common distributions\u2014even those whose density functions are not concave themselves!) is known as adaptive rejection sampling (ARS)."}, {"text": "A rejection of the null hypothesis implies that the correct hypothesis lies in the logical complement of the null hypothesis. But no specific alternatives need to have been specified. The rejection of the null hypothesis does not tell us which of any possible alternatives might be better supported."}, {"text": "Only in 3% of the cases, where the combination of those two falls outside the \"core of the ziggurat\" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed."}, {"text": "Only in 3% of the cases, where the combination of those two falls outside the \"core of the ziggurat\" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed."}, {"text": "Only in 3% of the cases, where the combination of those two falls outside the \"core of the ziggurat\" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed."}, {"text": "Only in 3% of the cases, where the combination of those two falls outside the \"core of the ziggurat\" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed."}, {"text": "Only in 3% of the cases, where the combination of those two falls outside the \"core of the ziggurat\" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed."}]}, {"question": "What is an example of a discrete random variable", "positive_ctxs": [{"text": "Every probability pi is a number between 0 
and 1, and the sum of all the probabilities is equal to 1. Examples of discrete random variables include: The number of eggs that a hen lays in a given day (it can't be 2.3) The number of people going to a given soccer match."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = \u22121; otherwise X = the value of the spinner as in the preceding example."}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. 
can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "An example of such distributions could be a mix of discrete and continuous distributions\u2014for example, a random variable that is 0 with probability 1/2, and takes a random value from a normal distribution with probability 1/2. It can still be studied to some extent by considering it to have a pdf of"}, {"text": "In its discrete version, a random field is a list of random numbers whose indices are identified with a discrete set of points in a space (for example, n-dimensional Euclidean space). More generally, the values might be defined over a continuous domain, and the random field might be thought of as a \"function valued\" random variable as described above. In quantum field theory the notion is even generalized to a random functional, one that takes on random value over a space of functions (see Feynman integral)."}, {"text": "The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution. If the random variable is denoted by"}]}, {"question": "How do you prove covariance stationary", "positive_ctxs": [{"text": "A sequence of random variables is covariance stationary if all the terms of the sequence have the same mean, and if the covariance between any two terms of the sequence depends only on the relative positions of the two terms, that is, on how far apart they are located from each other, and not on their absolute position"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}]}, {"question": "What is auxiliary classifier", "positive_ctxs": [{"text": "Auxiliary Classifiers are type of architectural component that seek to improve the convergence of very deep networks. 
They are classifier heads we attach to layers before the end of the network."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. 
During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "How do you optimize code", "positive_ctxs": [{"text": "Try to avoid implementing cheap tricks to make your code run faster.Optimize your Code using Appropriate Algorithm. Optimize Your Code for Memory. printf and scanf Vs cout and cin. Using Operators. if Condition Optimization. Problems with Functions. Optimizing Loops. Data Structure Optimization.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": ", which could be close to infinity. Moreover, even when you apply the Rejection sampling method, it is always hard to optimize the bound"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}]}, {"question": "What is negative sampling in Word2Vec", "positive_ctxs": [{"text": "Word2Vec slightly customizes the process and calls it negative sampling. In Word2Vec, the words for the negative samples (used for the corrupted pairs) are drawn from a specially designed distribution, which favours less frequent words to be drawn more often."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. 
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "Cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research. In this sampling plan, the total population is divided into these groups (known as clusters) and a simple random sample of the groups is selected."}]}, {"question": "What are feature extraction algorithms", "positive_ctxs": [{"text": "Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy. Many machine learning practitioners believe that properly optimized feature extraction is the key to effective model construction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ".Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm. 
For example, feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features."}, {"text": ".Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm. For example, feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features."}, {"text": "In the context of machine learning, mapping methods may be viewed as a preliminary feature extraction step, after which pattern recognition algorithms are applied. Typically those that just give a visualisation are based on proximity data \u2013 that is, distance measurements."}, {"text": "Many data analysis software packages provide for feature extraction and dimension reduction. Common numerical programming environments such as MATLAB, SciLab, NumPy, Sklearn and the R language provide some of the simpler feature extraction techniques (e.g. 
principal component analysis) via built-in commands."}, {"text": "Once features have been detected, a local image patch around the feature can be extracted. This extraction may involve quite considerable amounts of image processing. The result is known as a feature descriptor or feature vector."}, {"text": "Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy. Many machine learning practitioners believe that properly optimized feature extraction is the key to effective model construction.Results can be improved using constructed sets of application-dependent features, typically built by an expert. One such process is called feature engineering."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What are discrete distributions", "positive_ctxs": [{"text": "A discrete distribution is a statistical distribution that shows the probabilities of discrete (countable) outcomes, such as 1, 2, 3 Overall, the concepts of discrete and continuous probability distributions and the random variables they describe are the underpinnings of probability theory and statistical analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If a theorem can be proved in this general setting, it holds for both discrete and continuous distributions as well as others; separate proofs are not required for discrete and continuous distributions."}, {"text": "Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. 
The measure theory-based treatment of probability covers the discrete, continuous, a mix of the two, and more."}, {"text": ".Probability distributions are generally divided into two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a dice), and the probabilities are here encoded by a discrete list of the probabilities of the outcomes, known as the probability mass function."}, {"text": ".Probability distributions are generally divided into two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a dice), and the probabilities are here encoded by a discrete list of the probabilities of the outcomes, known as the probability mass function."}, {"text": ".Probability distributions are generally divided into two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a dice), and the probabilities are here encoded by a discrete list of the probabilities of the outcomes, known as the probability mass function."}, {"text": ".Probability distributions are generally divided into two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a dice), and the probabilities are here encoded by a discrete list of the probabilities of the outcomes, known as the probability mass function."}, {"text": "The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. 
Suppose that the distribution consists of a number of discrete probability masses"}]}, {"question": "Is AI Artificial Intelligence", "positive_ctxs": [{"text": "Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing (NLP), speech recognition and machine vision."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, Oxford University Press, 2008. Discusses Artificial Intelligence from the perspective of Existential risk. In particular, Sections 1-4 give background to the definition of Friendly AI in Section 5."}, {"text": "In 1987 a biennial conference, the International Conference on AI and Law (ICAIL), was instituted. This conference began to be seen as the main venue for publishing and the developing ideas within AI and Law, and it led to the foundation of the International Association for Artificial Intelligence and Law (IAAIL), to organize and convene subsequent ICAILs. This, in turn, led to the foundation of the Artificial Intelligence and Law Journal, first published in 1992."}, {"text": "Although Artificial Intelligence and Computational Intelligence seek a similar long-term goal: reach general intelligence, which is the intelligence of a machine that could perform any intellectual task that a human being can; there's a clear difference between them. 
According to Bezdek (1994), Computational Intelligence is a subset of Artificial Intelligence."}, {"text": "Distributed Artificial Intelligence (DAI) also called Decentralized Artificial Intelligence is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems."}]}, {"question": "How do I tell which loss function is suitable for image classification", "positive_ctxs": [{"text": "2 Answers. If you have two classes (i.e. binary classification), you should use a binary crossentropy loss. If you have more than two you should use a categorical crossentropy loss."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used."}, {"text": "On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. 
So please don't tell us that reinforcement works and punishment does not, because the opposite is the case.\" This was a joyous moment, in which I understood an important truth about the world: because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them."}, {"text": "Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0-1 loss function (0\u20131 indicator function), which takes the value of 0 if the predicted classification equals that of the true class or a 1 if the predicted classification does not match the true class. This selection is modeled by"}, {"text": "However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute loss function surrogates which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem."}, {"text": "The most common loss function for regression is the square loss function (also known as the L2-norm). This familiar loss function is used in Ordinary Least Squares regression."}, {"text": "C is a scalar constant (set by the user of the learning algorithm) that controls the balance between the regularization and the loss function.Popular loss functions include the hinge loss (for linear SVMs) and the log loss (for linear logistic regression). If the regularization function R is convex, then the above is a convex problem. 
Many algorithms exist for solving such problems; popular ones for linear classification include (stochastic) gradient descent, L-BFGS, coordinate descent and Newton methods."}, {"text": "The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regards to sample complexity) than for the logistic loss or hinge loss functions. In addition, functions which yield high values of"}]}, {"question": "Does unlabeled data really help in semi supervised learning", "positive_ctxs": [{"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples"}, {"text": "Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is trained based on the labeled data only. 
This classifier is then applied to the unlabeled data to generate more labeled examples as input for the supervised learning algorithm."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}]}, {"question": "What is ML classification", "positive_ctxs": [{"text": "Classification is a type of supervised learning. It specifies the class to which data elements belong to and is best used when the output has finite and discrete values. It predicts a class for an input variable as well."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main disagreement is whether all of ML is part of AI, as this would mean that anyone using ML could claim they are using AI. Others have the view that not all of ML is part of AI where only an 'intelligent' subset of ML is part of AI. The question to what is the difference between ML and AI is answered by Judea Pearl in The Book of Why. 
Accordingly ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals."}]}, {"question": "What does SVM optimize", "positive_ctxs": [{"text": "As already discussed, SVM aims at maximizing the geometric margin and returns the corresponding hyperplane. Such points are called as support vectors (fig. - 1). Therefore, the optimization problem as defined above is equivalent to the problem of maximizing the margin value (not geometric/functional margin values)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}]}, {"question": "What is wide and deep learning", "positive_ctxs": [{"text": "At Google, we call it Wide & Deep Learning. It's useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter."}, {"text": "Torch: A scientific computing framework with wide support for machine learning algorithms, written in C and Lua. 
The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter."}]}, {"question": "How do you find the Studentized residual", "positive_ctxs": [{"text": "A studentized residual is calculated by dividing the residual by an estimate of its standard deviation. The standard deviation for each residual is computed with the observation excluded. For this reason, studentized residuals are sometimes referred to as externally studentized residuals."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is Poisson distribution formula", "positive_ctxs": [{"text": "Poisson Formula. P(x; \u03bc) = (e^-\u03bc)(\u03bc^x) / x! where x is the actual number of successes that result from the experiment, and e is approximately equal to 2.71828. 
The Poisson distribution has the following properties: The mean of the distribution is equal to \u03bc. The variance is also equal to \u03bc."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter. The discrete compound Poisson distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution."}, {"text": "All of the cumulants of the Poisson distribution are equal to the expected value \u03bb. The nth factorial moment of the Poisson distribution is \u03bb^n."}, {"text": "The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean \u03bc, a confidence interval for \u03bc with confidence level 1 \u2013 \u03b1 is"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "To display the intuition behind this statement, consider two independent Poisson processes, \u201cSuccess\u201d and \u201cFailure\u201d, with intensities p and 1 \u2212 p. Together, the Success and Failure processes are equivalent to a single Poisson process of intensity 1, where an occurrence of the process is a success if a corresponding independent coin toss comes up heads with probability p; otherwise, it is a failure. 
If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. The count is also, however, the count of the Success Poisson process at the random time T of the rth occurrence in the Failure Poisson process. The Success count follows a Poisson distribution with mean pT, where T is the waiting time for r occurrences in a Poisson process of intensity 1 \u2212 p, i.e., T is gamma-distributed with shape parameter r and intensity 1 \u2212 p. Thus, the negative binomial distribution is equivalent to a Poisson distribution with mean pT, where the random variate T is gamma-distributed with shape parameter r and intensity (1 \u2212 p)/p."}, {"text": "The probability distribution of the number of fixed points in a uniformly distributed random permutation approaches a Poisson distribution with expected value 1 as n grows. In particular, it is an elegant application of the inclusion\u2013exclusion principle to show that the probability that there are no fixed points approaches 1/e. When n is big enough, the probability distribution of fixed points is almost the Poisson distribution with expected value 1."}]}, {"question": "What are the applications of F test", "positive_ctxs": [{"text": "F-test is used either for testing the hypothesis about the equality of two population variances or the equality of two or more population means. The equality of two population means was dealt with t-test. Besides a t-test, we can also apply F-test for testing equality of two population means."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (\u03b1). If F \u2265 FCritical, the null hypothesis is rejected."}, {"text": "The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (\u03b1). 
If F \u2265 FCritical, the null hypothesis is rejected."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V\u2217 consisting of linear maps f: V \u2192 F where F is the field of scalars. Multilinear maps T: V^n \u2192 F can be described via tensor products of elements of V\u2217."}, {"text": "Standard Univariate ANOVA F test\u2014This test is commonly used given only two levels of the within-subjects factor (i.e. time point 1 and time point 2). This test is not recommended given more than 2 levels of the within-subjects factor because the assumption of sphericity is commonly violated in such cases."}]}, {"question": "How do you interpret confidence intervals and odds ratio", "positive_ctxs": [{"text": "The value of the odds ratio tells you how much more likely someone under 25 might be to make a claim, for example, and the associated confidence interval indicates the degree of uncertainty associated with that ratio."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "In this case, the odds ratio equals one, and conversely the odds ratio can only equal one if the joint probabilities can be factored in this way. Thus the odds ratio equals one if and only if X and Y are independent."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. 
Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "This is an asymptotic approximation, and will not give a meaningful result if any of the cell counts are very small. If L is the sample log odds ratio, an approximate 95% confidence interval for the population log odds ratio is L \u00b1 1.96SE. This can be mapped to exp(L \u2212 1.96SE), exp(L + 1.96SE) to obtain a 95% confidence interval for the odds ratio."}, {"text": "The odds ratio is a function of the cell probabilities, and conversely, the cell probabilities can be recovered given knowledge of the odds ratio and the marginal probabilities P(X = 1) = p11 + p10 and P(Y = 1) = p11 + p01. If the odds ratio R differs from 1, then"}]}, {"question": "Why is cross entropy used for classification", "positive_ctxs": [{"text": "Cross Entropy is definitely a good loss function for Classification Problems, because it minimizes the distance between two probability distributions - predicted and actual. So cross entropy make sure we are minimizing the difference between the two probability. This is the reason."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The cross entropy loss is closely related to the Kullback\u2013Leibler divergence between the empirical distribution and the predicted distribution. The cross entropy loss is ubiquitous in modern deep neural networks."}, {"text": "The cross entropy has been used as an error metric to measure the distance between two hypotheses. Its absolute value is minimum when the two distributions are identical. It is the information measure most closely related to the log maximum likelihood (see section on \"Parameter estimation\")."}, {"text": "The conditional quantum entropy is an entropy measure used in quantum information theory. 
It is a generalization of the conditional entropy of classical information theory."}, {"text": "Jaynes stated Bayes' theorem was a way to calculate a probability, while maximum entropy was a way to assign a prior probability distribution. It is however, possible in concept to solve for a posterior distribution directly from a stated prior distribution using the principle of minimum cross entropy (or the Principle of Maximum Entropy being a special case of using a uniform distribution as the given prior), independently of any Bayesian considerations by treating the problem formally as a constrained optimisation problem, the Entropy functional being the objective function. For the case of given average values as testable information (averaged over the sought after probability distribution), the sought after distribution is formally the Gibbs (or Boltzmann) distribution the parameters of which must be solved for in order to achieve minimum cross entropy and satisfy the given testable information."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L.; Gardner, S.; Oppy, G. (Dec 2007). \"Why Simplicity is no Problem for Bayesians\"."}, {"text": "The notion of a \"center\" as minimizing variation can be generalized in information geometry as a distribution that minimizes divergence (a generalized distance) from a data set. The most common case is maximum likelihood estimation, where the maximum likelihood estimate (MLE) maximizes likelihood (minimizes expected surprisal), which can be interpreted geometrically by using entropy to measure variation: the MLE minimizes cross entropy (equivalently, relative entropy, Kullback\u2013Leibler divergence)."}]}, {"question": "What is the difference between a class boundary in a class limit", "positive_ctxs": [{"text": "Class limits specify the span of data values that fall within a class. Class boundaries are values halfway between the upper class limit of one class and the lower class limit of the next. Class limits are not possible data values. Class boundaries specify the span of data values that fall within a class."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If a learner is required to be effective, then an indexed class of recursive languages is learnable in the limit if there is an effective procedure that uniformly enumerates tell-tales for each language in the class (Condition 1). 
It is not hard to see that if an ideal learner (i.e., an arbitrary function) is allowed, then an indexed class of languages is learnable in the limit if each language in the class has a tell-tale (Condition 2)."}, {"text": "The limit mentioned above is user definable. A larger limit will allow a greater difference between successive threshold values. Advantages of this can be quicker execution but with a less clear boundary between background and foreground."}, {"text": "The observed data can be arranged in classes or groups with serial number k. Each group has a lower limit (Lk) and an upper limit (Uk). When the class (k) contains mk data and the total number of data is N, then the relative class or group frequency is found from:"}, {"text": "The table shows which language classes are identifiable in the limit in which learning model. On the right-hand side, each language class is a superclass of all lower classes. Each learning model (i.e. type of presentation) can identify in the limit all classes below it."}, {"text": "The Kolmogorov structure function of an individual data string expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. The structure function determines all stochastic properties of the individual data string: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the true model is in the model class considered or not. In the classical case we talk about a set of data with a probability distribution, and the properties are those of the expectations."}, {"text": "Despite the fact that the professional (upper) middle class is a privileged minority, it is perhaps the most influential class in the United States."}, {"text": "The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. 
Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets."}]}, {"question": "What is unsupervised feature learning", "positive_ctxs": [{"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that captures some structure underlying the high-dimensional input data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}, {"text": "Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. 
When the feature learning is performed in an unsupervised way, it enables a form of semisupervised learning where features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The only requirement to be called an unsupervised learning strategy is to learn a new feature space that captures the characteristics of the original space by maximizing some objective function or minimising some loss function. Therefore, generating a covariance matrix is not unsupervised learning, but taking the eigenvectors of the covariance matrix is because the linear algebra eigendecomposition operation maximizes the variance; this is known as principal component analysis. Similarly, taking the log-transform of a dataset is not unsupervised learning, but passing input data through multiple sigmoid functions while minimising some distance function between the generated and resulting data is, and is known as an Autoencoder."}, {"text": "The only requirement to be called an unsupervised learning strategy is to learn a new feature space that captures the characteristics of the original space by maximizing some objective function or minimising some loss function. Therefore, generating a covariance matrix is not unsupervised learning, but taking the eigenvectors of the covariance matrix is because the linear algebra eigendecomposition operation maximizes the variance; this is known as principal component analysis. 
Similarly, taking the log-transform of a dataset is not unsupervised learning, but passing input data through multiple sigmoid functions while minimising some distance function between the generated and resulting data is, and is known as an Autoencoder."}, {"text": "The only requirement to be called an unsupervised learning strategy is to learn a new feature space that captures the characteristics of the original space by maximizing some objective function or minimising some loss function. Therefore, generating a covariance matrix is not unsupervised learning, but taking the eigenvectors of the covariance matrix is because the linear algebra eigendecomposition operation maximizes the variance; this is known as principal component analysis. Similarly, taking the log-transform of a dataset is not unsupervised learning, but passing input data through multiple sigmoid functions while minimising some distance function between the generated and resulting data is, and is known as an Autoencoder."}]}, {"question": "What does it mean if a test is not statistically significant", "positive_ctxs": [{"text": "Statistically significant means a result is unlikely due to chance. The p-value is the probability of obtaining the difference we saw from a sample (or a larger one) if there really isn't a difference for all users. Statistical significance doesn't mean practical significance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably."}, {"text": "Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. 
For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably."}, {"text": "Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably."}, {"text": "Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably."}, {"text": "Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably."}, {"text": "ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result, when a probability (p-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high."}, {"text": "ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. 
A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, assuming the truth of the null hypothesis. A statistically significant result, when a probability (p-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high."}]}, {"question": "Why do we normalize a feature", "positive_ctxs": [{"text": "Motivation. Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}, {"text": "Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function h to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that feature actually means feature vector."}, {"text": "Marvin Minsky writes \"This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?\" Nick Bostrom observes that \"A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore.\""}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). \"Why Simplicity is no Problem for Bayesians\"."}, {"text": "It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g."}, {"text": "It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g."}, {"text": "It is an anomaly for a small city to field such a good team. the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern\u2014that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere?"}]}, {"question": "In what real world applications is Naive Bayes classifier used", "positive_ctxs": [{"text": "Naive Bayes uses a similar method to predict the probability of different class based on various attributes. This algorithm is mostly used in text classification and with problems having multiple classes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Naive Bayes is a successful classifier based upon the principle of maximum a posteriori (MAP). 
This approach is naturally extensible to the case of having more than two classes, and was shown to perform well in spite of the underlying simplifying assumption of conditional independence."}, {"text": "In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a Naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically maximized using maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see #Estimating the coefficients."}, {"text": "In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a Naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically maximized using maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see #Estimating the coefficients."}, {"text": "adding computer vision, incorporating AR cameras into smartphone applications and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulated. 
Information about the environment and its objects is overlaid on the real world. This information can be virtual or real, e.g."}, {"text": "MAS have not only been applied in academic research, but also in industry. MAS are applied in the real world to graphical applications such as computer games. Agent systems have been used in films."}, {"text": "Naive Bayes classifier with multinomial or multivariate Bernoulli event models. The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set. Additional terms in the training cost function can easily perform regularization of the final model. Examples of discriminative training of linear classifiers include:"}]}, {"question": "How is deep Q learning implemented", "positive_ctxs": [{"text": "Implementing Deep Q-Learning using Tensorflow. Prerequisites: Deep Q-Learning. Step 1: Importing the required libraries. Step 2: Building the Environment. Step 3: Building the learning agent. Step 4: Finding the Optimal Strategy. The agent tries different methods to reach the top and thus gaining knowledge from each episode. Step 5: Testing the Learning Agent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument."}, {"text": "The Hebbian rule is both local and incremental. 
For the Hopfield networks, it is implemented in the following manner, when learning"}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}]}, {"question": "Why Q learning is off policy", "positive_ctxs": [{"text": "Q-learning is called off-policy because the updated policy is different from the behavior policy, so Q-Learning is off-policy. In other words, it estimates the reward for future actions and appends a value to the new state without actually following any greedy policy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. 
This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "Because the future maximum approximated action value in Q-learning is evaluated using the same Q function as in current action selection policy, in noisy environments Q-learning can sometimes overestimate the action values, slowing the learning. A variant called Double Q-learning was proposed to correct this. Double Q-learning is an off-policy reinforcement learning algorithm, where a different policy is used for value evaluation than what is used to select the next action."}, {"text": "The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument."}, {"text": "An important distinction in RL is the difference between on-policy algorithms that require evaluating or improving the policy that collects data, and off-policy algorithms that can learn a policy from data generated by an arbitrary policy. Generally, value-function based methods such as Q-learning are better suited for off-policy learning and have better sample-efficiency - the amount of data required to learn a task is reduced because data is re-used for learning. At the extreme, offline (or \"batch\") RL considers learning a policy from a fixed dataset without additional interaction with the environment."}, {"text": "Select a random subset Q of [n] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b(k) is equal to"}, {"text": "A policy that achieves these optimal values in each state is called optimal. 
Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}]}, {"question": "How do you calculate z score normalization", "positive_ctxs": [{"text": "The formula for calculating a z-score is z = (x-\u03bc)/\u03c3, where x is the raw score, \u03bc is the population mean, and \u03c3 is the population standard deviation. As the formula shows, the z-score is simply the raw score minus the population mean, divided by the population standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "\u03c3 is the standard deviation of the population. The absolute value of z represents the distance between that raw score x and the population mean in units of the standard deviation. z is negative when the raw score is below the mean, positive when above."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "How do you know if Arima model is accurate", "positive_ctxs": [{"text": "How to find accuracy of ARIMA model? Problem description: Prediction on CPU utilization. Step 1: From Elasticsearch I collected 1000 observations and exported on Python. Step 2: Plotted the data and checked whether data is stationary or not. Step 3: Used log to convert the data into stationary form. Step 4: Done DF test, ACF and PACF."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. 
In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors i.e."}]}, {"question": "How do you find the similarity between two documents", "positive_ctxs": [{"text": "Generally a cosine similarity between two documents is used as a similarity measure of documents. In Java, you can use Lucene (if your collection is pretty large) or LingPipe to do this. The basic concept would be to count the terms in every document and calculate the dot product of the term vectors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? 
How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies (using tf\u2013idf weights) cannot be negative. The angle between two term frequency vectors cannot be greater than 90\u00b0."}, {"text": "In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies (using tf\u2013idf weights) cannot be negative. The angle between two term frequency vectors cannot be greater than 90\u00b0."}, {"text": "CL-ESA exploits a document-aligned multilingual reference collection (e.g., again, Wikipedia) to represent a document as a language-independent concept vector. 
The relatedness of two documents in different languages is assessed by the cosine similarity between the corresponding vector representations."}]}, {"question": "What does a significance level of 0.01 mean", "positive_ctxs": [{"text": "The significance level for a given hypothesis test is a value for which a P-value less than or equal to is considered statistically significant. Typical values for are 0.1, 0.05, and 0.01. These values correspond to the probability of observing such an extreme value by chance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, a sample Pearson correlation coefficient of 0.01 is statistically significant if the sample size is 1000. Reporting only the significant p-value from this analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application."}, {"text": "(the significance level corresponding to the cutoff bound). However, if testing for whether the coin is biased towards heads or tails, a two-tailed test would be used, and a data set of five heads (sample mean 1) is as extreme as a data set of five tails (sample mean 0). As a result, the p-value would be"}, {"text": "(the significance level corresponding to the cutoff bound). However, if testing for whether the coin is biased towards heads or tails, a two-tailed test would be used, and a data set of five heads (sample mean 1) is as extreme as a data set of five tails (sample mean 0). As a result, the p-value would be"}, {"text": "Admittedly, such a misinterpretation is encouraged by the word 'confidence'. \"A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval."}, {"text": "Admittedly, such a misinterpretation is encouraged by the word 'confidence'. 
\"A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval."}, {"text": "The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate considering that all null hypotheses are true (the \"global null\" hypothesis). The choice of a significance level may thus be somewhat arbitrary (i.e. setting 10% (0.1), 5% (0.05), 1% (0.01) etc."}, {"text": "The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate considering that all null hypotheses are true (the \"global null\" hypothesis). The choice of a significance level may thus be somewhat arbitrary (i.e. setting 10% (0.1), 5% (0.05), 1% (0.01) etc."}]}, {"question": "What are the different classifiers in machine learning", "positive_ctxs": [{"text": "We will learn Classification algorithms, types of classification algorithms, support vector machines (SVM), Naive Bayes, Decision Tree and Random Forest Classifier in this tutorial."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "Machine learning approaches in particular can suffer from different data biases. A machine learning system trained on current customers only may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the same constitutional and unconscious biases already present in society."}]}, {"question": "What do you mean accuracy", "positive_ctxs": [{"text": "the condition or quality of being true, correct, or exact; freedom from error or defect; precision or exactness; correctness. Chemistry, Physics. the extent to which a given measurement agrees with the standard value for that measurement. Compare precision (def. 
6)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "bigger than x, it does not necessarily mean you have made it plausible that it is smaller or equal than x; alternatively you may just have done a lousy measurement with low accuracy. Confirming the null hypothesis two-sided would amount to positively proving it is bigger or equal than 0 AND to positively proving it is smaller or equal than 0; this is something for which you need infinite accuracy as well as exactly zero effect neither of which normally are realistic. Also measurements will never indicate a non-zero probability of exactly zero difference.)"}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. 
By reading these over-optimistic scores, then you will be very happy and will think that your machine learning algorithm is doing an excellent job. Obviously, you would be on the wrong track."}, {"text": "It is also possible that no mean exists. Consider a color wheel\u2014there is no mean to the set of all colors. In these situations, you must decide which mean is most useful."}]}, {"question": "What does self selection bias mean", "positive_ctxs": [{"text": "In statistics, self-selection bias arises in any situation in which individuals select themselves into a group, causing a biased sample with nonprobability sampling. In such fields, a poll suffering from such bias is termed a self-selected listener opinion poll or \"SLOP\"."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "Participation bias or non-response bias is a phenomenon in which the results of elections, studies, polls, etc. become non-representative because the participants disproportionately possess certain traits which affect the outcome. These traits mean the sample is systematically different from the target population, potentially resulting in biased estimates.For instance, a study found that those who refused to answer a survey on AIDS tended to be \"older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure.\""}, {"text": "Participation bias or non-response bias is a phenomenon in which the results of elections, studies, polls, etc. become non-representative because the participants disproportionately possess certain traits which affect the outcome. 
These traits mean the sample is systematically different from the target population, potentially resulting in biased estimates.For instance, a study found that those who refused to answer a survey on AIDS tended to be \"older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure.\""}, {"text": "A problem arises where an intelligent agent's prior expectations interact with the environment to form a self reinforcing feed back loop. This is the problem of bias or prejudice. Universal priors reduce but do not eliminate this problem."}, {"text": "The factors leading to the optimistic bias can be categorized into four different groups: desired end states of comparative judgment, cognitive mechanisms, information about the self versus a target, and underlying affect. These are explained more in detail below."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "What does a normality test show", "positive_ctxs": [{"text": "A normality test is used to determine whether sample data has been drawn from a normally distributed population (within some tolerance). A number of statistical tests, such as the Student's t-test and the one-way and two-way ANOVA require a normally distributed sample population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. 
The null hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small p-value indicates non-normal data. Multivariate normality tests include the Cox\u2013Small test"}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov\u2013Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-squared test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares."}, {"text": "to test for normality of residuals, to test whether two samples are drawn from identical distributions (see Kolmogorov\u2013Smirnov test), or whether outcome frequencies follow a specified distribution (see Pearson's chi-squared test). In the analysis of variance, one of the components into which the variance is partitioned may be a lack-of-fit sum of squares."}, {"text": "Sample-based effect sizes are distinguished from test statistics used in hypothesis testing, in that they estimate the strength (magnitude) of, for example, an apparent relationship, rather than assigning a significance level reflecting whether the magnitude of the relationship observed could be due to chance. The effect size does not directly determine the significance level, or vice versa. 
Given a sufficiently large sample size, a non-null statistical comparison will always show a statistically significant result unless the population effect size is exactly zero (and even there it will show statistical significance at the rate of the Type I error used)."}, {"text": "Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Measures may have high validity, but when the test does not appear to be measuring what it is, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid."}, {"text": "A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitely rule in the presence of the disease. However, a negative result from a test with a high specificity is not necessarily useful for ruling out disease. For example, a test that always returns a negative test result will have a specificity of 100% because specificity does not consider false negatives."}]}, {"question": "Whats the advantage of importance sampling", "positive_ctxs": [{"text": "Importance sampling is a useful technique for investigating the properties of a distri- bution while only having samples drawn from a different (proposal) distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average"}, {"text": "Importance sampling provides a very important tool to perform Monte-Carlo integration. The main result of importance sampling to this method is that the uniform sampling of"}, {"text": "Hence, the basic methodology in importance sampling is to choose a distribution which \"encourages\" the important values. 
This use of \"biased\" distributions will result in a biased estimator if it is applied directly in the simulation. However, the simulation outputs are weighted to correct for the use of the biased distribution, and this ensures that the new importance sampling estimator is unbiased."}, {"text": "In statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. It is related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both."}, {"text": "The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the \"art\" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling."}, {"text": "possibly infinite memory (adaptive equalizers)In principle, the importance sampling ideas remain the same in these situations, but the design becomes much harder. A successful approach to combat this problem is essentially breaking down a simulation into several smaller, more sharply defined subproblems. Then importance sampling strategies are used to target each of the simpler subproblems."}, {"text": "The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. 
Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism."}]}, {"question": "What is the purpose of batch normalization", "positive_ctxs": [{"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The correlation between batch normalization and internal covariate shift is widely accepted but was not supported by experimental results. Scholars recently show with experiments that the hypothesized relationship is not an accurate one. Rather, the enhanced accuracy with the batch normalization layer seems to be independent of internal covariate shift."}, {"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015.While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network."}, {"text": "Besides analyzing this correlation experimentally, theoretical analysis is also provided for verification that batch normalization could result in a smoother landscape. Consider two identical networks, one contains batch normalization layers and the other doesn't, the behaviors of these two networks are then compared. Denote the loss functions as"}, {"text": "In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. 
Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restrained to each mini-batch in the training process."}, {"text": "The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift."}, {"text": "is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer."}, {"text": "Therefore, the method of batch normalization is proposed to reduce these unwanted shifts to speed up training and to produce more reliable models."}]}, {"question": "What is hinge loss in machine learning", "positive_ctxs": [{"text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs). For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning, the hinge loss is a loss function used for training classifiers. 
The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs).For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs).For an intended output t = \u00b11 and a classifier score y, the hinge loss of the prediction y is defined as"}, {"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "it is also possible to extend the hinge loss itself for such an end. Several different variations of multiclass hinge loss have been proposed. For example, Crammer and Singer"}, {"text": "it is also possible to extend the hinge loss itself for such an end. Several different variations of multiclass hinge loss have been proposed. For example, Crammer and Singer"}, {"text": "The hinge loss provides a relatively tight, convex upper bound on the 0\u20131 indicator function. Specifically, the hinge loss equals the 0\u20131 indicator function when"}]}, {"question": "What type of learning is involved in Adaptive Resonance Theory", "positive_ctxs": [{"text": "Adaptive resonance theory is a type of neural network technique developed by Stephen Grossberg and Gail Carpenter in 1987. 
The basic ART uses unsupervised learning technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "LAPART The Laterally Primed Adaptive Resonance Theory (LAPART) neural networks couple two Fuzzy ART algorithms to create a mechanism for making predictions based on learned associations. The coupling of the two Fuzzy ARTs has a unique stability that allows the system to converge rapidly towards a clear solution. Additionally, it can perform logical inference and supervised learning similar to fuzzy ARTMAP."}, {"text": "Inferential Theory of Learning (ITL) is an area of machine learning which describes inferential processes performed by learning agents. ITL has been continuously developed by Ryszard S. Michalski, starting in the 1980s. The first known publication of ITL was in 1983."}, {"text": "It has been shown that Hebb's rule in its basic form is unstable. Oja's Rule, BCM Theory are other learning rules built on top of or alongside of the Hebb's Rule in the study of biological neurons."}, {"text": "Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. For simplicity let us focus on Michigan-style architecture with supervised learning. See the illustrations on the right laying out the sequential steps involved in this type of generic LCS."}, {"text": "Theory is typically drawn from the literature in the learning sciences, education, psychology, sociology, and philosophy. 
The design dimension of the model includes: learning design, interaction design, and study design."}, {"text": "Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time (Pesic 2005) in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound.The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units."}]}, {"question": "What is the use of matrix factorization", "positive_ctxs": [{"text": "Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "A factorization system for a category also gives rise to a notion of embedding. If (E, M) is a factorization system, then the morphisms in M may be regarded as the embeddings, especially when the category is well powered with respect to M. Concrete theories often have a factorization system in which M consists of the embeddings in the previous sense. This is the case of the majority of the examples given in this article."}, {"text": "A factorization system for a category also gives rise to a notion of embedding. If (E, M) is a factorization system, then the morphisms in M may be regarded as the embeddings, especially when the category is well powered with respect to M. 
Concrete theories often have a factorization system in which M consists of the embeddings in the previous sense. This is the case of the majority of the examples given in this article."}, {"text": "It is possible to use the SVD of a square matrix A to determine the orthogonal matrix O closest to A. The closeness of fit is measured by the Frobenius norm of O \u2212 A. The solution is the product UV*."}, {"text": "It is possible to use the SVD of a square matrix A to determine the orthogonal matrix O closest to A. The closeness of fit is measured by the Frobenius norm of O \u2212 A. The solution is the product UV*."}, {"text": "Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered."}, {"text": "Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered."}]}, {"question": "What if intercept is not significant in regression", "positive_ctxs": [{"text": "So, a highly significant intercept in your model is generally not a problem. 
By the same token, if the intercept is not significant you usually would not want to remove it from the model because by doing this you are creating a model that says that the response function must be zero when the predictors are all zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "For example, a sample Pearson correlation coefficient of 0.01 is statistically significant if the sample size is 1000. 
Reporting only the significant p-value from this analysis could be misleading if a correlation of 0.01 is too small to be of interest in a particular application."}, {"text": "The expected values can be interpreted as follows: The mean salary of public school teachers in the West is equal to the intercept term \u03b11 in the multiple regression equation and the differential intercept coefficients, \u03b12 and \u03b13, explain by how much the mean salaries of teachers in the North and South Regions vary from that of the teachers in the West. Thus, the mean salaries of teachers in the North and South is compared against the mean salary of the teachers in the West. Hence, the West Region becomes the base group or the benchmark group,i.e., the group against which the comparisons are made."}, {"text": "The expected values can be interpreted as follows: The mean salary of public school teachers in the West is equal to the intercept term \u03b11 in the multiple regression equation and the differential intercept coefficients, \u03b12 and \u03b13, explain by how much the mean salaries of teachers in the North and South Regions vary from that of the teachers in the West. Thus, the mean salaries of teachers in the North and South is compared against the mean salary of the teachers in the West. Hence, the West Region becomes the base group or the benchmark group,i.e., the group against which the comparisons are made."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "When would you use a hierarchical model", "positive_ctxs": [{"text": "In a nutshell, hierarchical linear modeling is used when you have nested data; hierarchical regression is used to add or remove variables from your model in multiple steps. 
Knowing the difference between these two seemingly similar terms can help you determine the most appropriate analysis for your study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}]}, {"question": "What is true error rate", "positive_ctxs": [{"text": "The true error rate is statistically defined as the error rate of the classifier on a large number of new cases that converge in the limit to the actual population distribution. 
It turns out that there are a number of ways of presenting sample cases to a classifier to get better estimates of the true error rate."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity."}, {"text": "Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity."}, {"text": "Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity."}, {"text": "Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate considering that all null hypotheses are true (the \"global null\" hypothesis). 
The choice of a significance level may thus be somewhat arbitrary (i.e. setting 10% (0.1), 5% (0.05), 1% (0.01) etc."}, {"text": "The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate considering that all null hypotheses are true (the \"global null\" hypothesis). The choice of a significance level may thus be somewhat arbitrary (i.e. setting 10% (0.1), 5% (0.05), 1% (0.01) etc."}]}, {"question": "How does SHOT learning work", "positive_ctxs": [{"text": "One-shot learning is a classification task where one example (or a very small number of examples) is given for each class, that is used to prepare a model, that in turn must make predictions about many unknown examples in the future."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}]}, {"question": "Is keras a part of TensorFlow", "positive_ctxs": [{"text": "Keras is a high-level interface and uses Theano or Tensorflow for its backend. It runs smoothly on both CPU and GPU. Keras supports almost all the models of a neural network \u2013 fully connected, convolutional, pooling, recurrent, embedding, etc. Furthermore, these models can be combined to build more complex models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As TensorFlow's market share among research papers was declining to the advantage of PyTorch TensorFlow Team announced a release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph, to the \"Define-by-Run\" scheme originally made popular by Chainer and later PyTorch. Other major changes included removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to the performance on GPU."}, {"text": "As TensorFlow's market share among research papers was declining to the advantage of PyTorch TensorFlow Team announced a release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph, to the \"Define-by-Run\" scheme originally made popular by Chainer and later PyTorch. Other major changes included removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to the performance on GPU."}, {"text": "In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. 
In January 2019, TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.TensorFlow Lite uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models."}, {"text": "In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. In January 2019, TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.TensorFlow Lite uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning, particularly using Google's own TensorFlow software. 
Google began using TPUs internally in 2015, and in 2018 made them available for third party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale."}]}, {"question": "What are the disadvantages of sampling", "positive_ctxs": [{"text": "Disadvantages of Sampling Since choice of sampling method is a judgmental task, there exist chances of biasness as per the mindset of the person who chooses it. Improper selection of sampling techniques may cause the whole process to defunct. Selection of proper size of samples is a difficult job."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Even though convenience sampling can be easy to obtain, its disadvantages usually outweigh the advantages. This sampling technique may be more appropriate for one type of study and less for another."}, {"text": "Some researchers have used search engines to construct sampling frames. This technique has disadvantages because search engine results are unsystematic and non-random making them unreliable for obtaining an unbiased sample. The sampling frame issue can be circumvented by using an entire population of interest, such as tweets by particular Twitter users or online archived content of certain newspapers as the sampling frame."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "One of the notorious disadvantages of BoW is that it ignores the spatial relationships among the patches, which are very important in image representation. 
Researchers have proposed several methods to incorporate the spatial information. For feature level improvements, correlogram features can capture spatial co-occurrences of features."}, {"text": "Another model that was developed to offset the disadvantages of the LPM is the probit model. The probit model uses the same approach to non-linearity as does the logit model; however, it uses the normal CDF instead of the logistic CDF."}, {"text": "Another model that was developed to offset the disadvantages of the LPM is the probit model. The probit model uses the same approach to non-linearity as does the logit model; however, it uses the normal CDF instead of the logistic CDF."}]}, {"question": "Is neural network a linear classifier", "positive_ctxs": [{"text": "Perceptron is a single layer neural network and a multi-layer perceptron is called Neural Networks. Perceptron is a linear classifier (binary). Also, it is used in supervised learning. It helps to classify the given input data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is a set of weights. The optimization problem of finding alpha is readily solved through neural networks, hence a \"meta-network\" where each \"neuron\" is in fact an entire neural network can be trained, and the synaptic weights of the final network is the weight applied to each expert. This is known as a linear combination of experts.It can be seen that most forms of neural networks are some subset of a linear combination: the standard neural net (where only one expert is used) is simply a linear combination with all"}, {"text": "Some researchers have achieved \"near-human performance\" on the MNIST database, using a committee of neural networks; in the same paper, the authors achieve performance double that of humans on other recognition tasks. 
The highest error rate listed on the original website of the database is 12 percent, which is achieved using a simple linear classifier with no preprocessing.In 2004, a best-case error rate of 0.42 percent was achieved on the database by researchers using a new classifier called the LIRA, which is a neural classifier with three neuron layers based on Rosenblatt's perceptron principles.Some researchers have tested artificial intelligence systems using the database put under random distortions. The systems in these cases are usually neural networks and the distortions used tend to be either affine distortions or elastic distortions."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control."}, {"text": "(2017) proposed elastic weight consolidation (EWC), a method to sequentially train a single artificial neural network on multiple tasks. This technique supposes that some weights of the trained neural network are more important for previously learned tasks than others. During training of the neural network on a new task, changes to the weights of the network are made less likely the greater their importance."}, {"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. 
In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}]}, {"question": "What is AUC score in machine learning", "positive_ctxs": [{"text": "AUC represents the probability that a random positive (green) example is positioned to the right of a random negative (red) example. AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0. AUC is scale-invariant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters w of a linear SVM with score function"}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What is uniform random permutation", "positive_ctxs": [{"text": "Def: A uniform random permutation is one in which each of the n! possible permutations are equally likely. Def Given a set of n elements, a k-permutation is a sequence containing k of the n elements."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The mathematical statement of this problem is as follows: pick a random permutation on n elements and k values from the range 1 to n, also at random, call these marks. What is the probability that there is at least one mark on every cycle of the permutation? The claim is this probability is k/n."}, {"text": "A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is often fundamental to fields that use randomized algorithms such as coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: this is ideally a random permutation of the 52 cards."}, {"text": "One method of generating a random permutation of a set of length n uniformly at random (i.e., each of the n! 
permutations is equally likely to appear) is to generate a sequence by taking a random number between 1 and n sequentially, ensuring that there is no repetition, and interpreting this sequence (x1, ..., xn) as the permutation"}, {"text": "This means that the expected number of cycles of size m in a permutation of length n less than m is zero (obviously). A random permutation of length at least m contains on average 1/m cycles of length m. In particular, a random permutation contains about one fixed point."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "where Hm is the mth harmonic number. Hence the expected number of cycles of length at most m in a random permutation is about ln m."}, {"text": "Sampling from a truncated exponential random variable is straightforward. Just take the log of a uniform random variable (with appropriate interval and corresponding truncation)."}]}, {"question": "How does sample size effect standard error", "positive_ctxs": [{"text": "The standard error is also inversely proportional to the sample size; the larger the sample size, the smaller the standard error because the statistic will approach the actual value. The standard error is considered part of descriptive statistics. It represents the standard deviation of the mean within a dataset."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. 
If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "is the common standard deviation of the outcomes in the treated and control groups. If constructed appropriately, a standardized effect size, along with the sample size, will completely determine the power. An unstandardized (direct) effect size is rarely sufficient to determine the power, as it does not contain information about the variability in the measurements."}, {"text": "to account for the added precision gained by sampling close to a larger percentage of the population. The effect of the FPC is that the error becomes zero when the sample size n is equal to the population size N."}, {"text": "to account for the added precision gained by sampling close to a larger percentage of the population. 
The effect of the FPC is that the error becomes zero when the sample size n is equal to the population size N."}]}, {"question": "What does low test retest reliability mean", "positive_ctxs": [{"text": "Therefore, a low test\u2013retest reliability correlation might be indicative of a measure with low reliability, of true changes in the persons being measured, or both. That is, in the test\u2013retest method of estimating reliability, it is not possible to separate the reliability of measure from its stability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Carryover effect, particularly if the interval between test and retest is short. When retested, people may remember their original answer, which could affect answers on the second administration."}, {"text": "Face validity is an estimate of whether a test appears to measure a certain criterion; it does not guarantee that the test actually measures phenomena in that domain. Measures may have high validity, but when the test does not appear to be measuring what it is, it has low face validity. Indeed, when a test is subject to faking (malingering), low face validity might make the test more valid."}, {"text": "The central assumption of reliability theory is that measurement errors are essentially random. This does not mean that errors arise from random processes. For any individual, an error in measurement is not a completely random event."}, {"text": "A hypothetical ideal \"gold standard\" test has a sensitivity of 100% with respect to the presence of the disease (it identifies all individuals with a well defined disease process; it does not have any false-negative results) and a specificity of 100% (it does not falsely identify someone with a condition that does not have the condition; it does not have any false-positive results). In practice, there are sometimes no true gold standard tests.As new diagnostic methods become available, the \"gold standard\" test may change over time. 
For instance, for the diagnosis of aortic dissection, the gold standard test used to be the aortogram, which had a sensitivity as low as 83% and a specificity as low as 87%."}, {"text": "The Z-test tells us that the 55 students of interest have an unusually low mean test score compared to most simple random samples of similar size from the population of test-takers. A deficiency of this analysis is that it does not consider whether the effect size of 4 points is meaningful. If instead of a classroom, we considered a subregion containing 900 students whose mean score was 99, nearly the same z-score and p-value would be observed."}, {"text": "Despite the apparent high accuracy of the test, the incidence of the disease is so low that the vast majority of patients who test positive do not have the disease. Nonetheless, the fraction of patients who test positive who do have the disease (0.019) is 19 times the fraction of people who have not yet taken the test who have the disease (0.001). Thus the test is not useless, and re-testing may improve the reliability of the result."}, {"text": "While reliability does not imply validity, reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid.For example, if a set of weighing scales consistently measured the weight of an object as 500 grams over the true weight, then the scale would be very reliable, but it would not be valid (as the returned weight is not the true weight)."}]}, {"question": "What are the steps involved in Bayesian data analysis", "positive_ctxs": [{"text": "2.1 Steps of Bayesian Data Analysis Choose a statistical model for the data in relation to the research questions. 
The model should have good theoretical justification and have parameters that are meaningful for the research questions. Obtain the posterior distributions for the model parameters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A data analytics approach can be used in order to predict energy consumption in buildings. The different steps of the data analysis process are carried out in order to realise smart buildings, where the building management and control operations including heating, ventilation, air conditioning, lighting and security are realised automatically by miming the needs of the building users and optimising resources like energy and time."}, {"text": "Since the in the GNG input data is presented sequentially one by one, the following steps are followed at each iteration:"}, {"text": "We can perform a Data editing and change the Sex of the Adult by knowing that the Adult is Pregnant we can make the assumption that the Adult is Female and make changes accordingly. We edit the dataset to have a clearer analysis of the data when performing data manipulation in the later steps within the data mining process."}, {"text": "Is it the symmetry of the physical body that forces are acting upon, or to the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?"}, {"text": "An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. The resulting analysis is similar to the basic cases of independent identically distributed data.The formulas for the non-linear-regression cases are summarized in the conjugate prior article."}, {"text": "An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. 
The resulting analysis is similar to the basic cases of independent identically distributed data.The formulas for the non-linear-regression cases are summarized in the conjugate prior article."}, {"text": "An additional set of cases occurs in Bayesian linear regression, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients. The resulting analysis is similar to the basic cases of independent identically distributed data.The formulas for the non-linear-regression cases are summarized in the conjugate prior article."}]}, {"question": "How does a regression tree work", "positive_ctxs": [{"text": "A regression tree is built through a process known as binary recursive partitioning, which is an iterative process that splits the data into partitions or branches, and then continues splitting each partition into smaller groups as the method moves up each branch."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "In computer science, a logistic model tree (LMT) is a classification model with an associated supervised training algorithm that combines logistic regression (LR) and decision tree learning.Logistic model trees are based on the earlier idea of a model tree: a decision tree that has linear regression models at its leaves to provide a piecewise linear regression model (where ordinary decision trees with constants at their leaves would produce a piecewise constant model). In the logistic variant, the LogitBoost algorithm is used to produce an LR model at every node in the tree; the node is then split using the C4.5 criterion. 
Each LogitBoost invocation is warm-started from its results in the parent node."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "What do you mean by reliability", "positive_ctxs": [{"text": "Quality Glossary Definition: Reliability. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "It is also possible that no mean exists. Consider a color wheel\u2014there is no mean to the set of all colors. In these situations, you must decide which mean is most useful."}]}, {"question": "How do you prove that two distributions are independent", "positive_ctxs": [{"text": "You can tell if two random variables are independent by looking at their individual probabilities. If those probabilities don't change when the events meet, then those variables are independent. Another way of saying this is that if the two variables are correlated, then they are not independent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Independence conditions are rules for deciding whether two variables are independent of each other. Variables are independent if the values of one do not directly affect the values of the other. Multiple causal models can share independence conditions."}, {"text": "The approximation has the basic property that it is a factorized distribution, i.e. 
a product of two or more independent distributions over disjoint subsets of the unobserved variables."}, {"text": "Recently there are two results described here: learning Poisson binomial distributions and learning sums of independent integer random variables. All the results below hold using the total variation distance as a distance measure."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Why was AlphaGo able to play go so well", "positive_ctxs": [{"text": "The original AlphaGo demonstrated superhuman Go-playing ability, but needed the expertise of human players to get there. Namely, it used a dataset of more than 100,000 Go games as a starting point for its own knowledge. AlphaGo Zero, by comparison, has only been programmed with the basic rules of Go."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "On 5 December 2017, DeepMind team released a preprint on arXiv, introducing AlphaZero, a program using generalized AlphaGo Zero's approach, which achieved within 24 hours a superhuman level of play in chess, shogi, and Go, defeating world-champion programs, Stockfish, Elmo, and 3-day version of AlphaGo Zero in each case.AlphaZero (AZ) is a more generalized variant of the AlphaGo Zero (AGZ) algorithm, and is able to play shogi and chess as well as Go. 
Differences between AZ and AGZ include:"}, {"text": "Demis Hassabis, the co-founder and CEO of DeepMind, said that AlphaGo Zero was so powerful because it was \"no longer constrained by the limits of human knowledge\". David Silver, one of the first authors of DeepMind's papers published in Nature on AlphaGo, said that it is possible to have generalised AI algorithms by removing the need to learn from humans.Google later developed AlphaZero, a generalized version of AlphaGo Zero that could play chess and Sh\u014dgi in addition to Go. In December 2017, AlphaZero beat the 3-day version of AlphaGo Zero by winning 60 games to 40, and with 8 hours of training it outperformed AlphaGo Lee on an Elo scale."}, {"text": "\"In China, AlphaGo was a \"Sputnik moment\" which helped convince the Chinese government to prioritize and dramatically increase funding for artificial intelligence.In 2017, the DeepMind AlphaGo team received the inaugural IJCAI Marvin Minsky medal for Outstanding Achievements in AI. \u201cAlphaGo is a wonderful achievement, and a perfect example of what the Minsky Medal was initiated to recognise\u201d, said Professor Michael Wooldridge, Chair of the IJCAI Awards Committee. \u201cWhat particularly impressed IJCAI was that AlphaGo achieves what it does through a brilliant combination of classic AI techniques as well as the state-of-the-art machine learning techniques that DeepMind is so closely associated with."}, {"text": "Mark Pesce of the University of Sydney called AlphaGo Zero \"a big technological advance\" taking us into \"undiscovered territory\".Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain \"implicit knowledge that the programmers have about how to construct machines to play problems like Go\" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. 
In contrast, DeepMind is \"confident that this approach is generalisable to a large number of domains\".In response to the reports, South Korean Go professional Lee Sedol said, \"The previous version of AlphaGo wasn\u2019t perfect, and I believe that\u2019s why AlphaGo Zero was made.\""}, {"text": "AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play. To avoid \"disrespectfully\" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%."}, {"text": "Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages."}]}, {"question": "What is the difference between false positive and false negative", "positive_ctxs": [{"text": "A false positive means that the results say you have the condition you were tested for, but you really don't. With a false negative, the results say you don't have a condition, but you really do."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."}, {"text": "A false positive error is a type I error where the test is checking a single condition, and wrongly gives an affirmative (positive) decision. However it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false positive risk (see Ambiguity in the definition of false positive rate, below)."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error.In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}]}, {"question": "Why does bootstrap work in machine learning", "positive_ctxs": [{"text": "The bootstrap method is a resampling technique used to estimate statistics on a population by sampling a dataset with replacement. 
It is used in applied machine learning to estimate the skill of machine learning models when making predictions on data not included in the training data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993, that Gordon et al., published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system."}, {"text": "In April 1993, Gordon et al., published in their seminal work an application of genetic type algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system. 
Independently, the works by Pierre Del Moral and Himilcon Carvalho, and by Pierre Del Moral, Andr\u00e9 Monin and G\u00e9rard Salut on particle filters were published in the mid-1990s."}, {"text": "Bootstrap aggregating, also called bagging (from bootstrap aggregating), is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method."}, {"text": "is the estimated standard error of the coefficient in the original model.The studentized test enjoys optimal properties as the statistic that is bootstrapped is pivotal (i.e. it does not depend on nuisance parameters as the t-test follows asymptotically a N(0,1) distribution), unlike the percentile bootstrap.Bias-corrected bootstrap \u2013 adjusts for bias in the bootstrap distribution."}, {"text": "machine learning problems and algorithms. Synonyms include formal learning theory and algorithmic inductive inference. Algorithmic learning theory is different from statistical learning theory in that it does not make use of statistical assumptions and analysis."}]}, {"question": "What do you do with an unbalanced data set", "positive_ctxs": [{"text": "7 Techniques to Handle Imbalanced Data: Use the right evaluation metrics. Resample the training set. Use K-fold Cross-Validation in the right way. Ensemble different resampled datasets. Resample with different ratios. 
Cluster the abundant class. Design your own models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The do calculus is the set of manipulations that are available to transform one expression into another, with the general goal of transforming expressions that contain the do operator into expressions that do not. Expressions that do not include the do operator can be estimated from observational data alone, without the need for an experimental intervention, which might be expensive, lengthy or even unethical (e.g., asking subjects to take up smoking). The set of rules is complete (it can be used to derive every true statement in this system)."}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. 
You can do the same for pseudo term vectors:"}]}, {"question": "What does TF IDF stand for", "positive_ctxs": [{"text": "frequency\u2013inverse document frequency"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In set theory, it may stand for \"the power set of x\". In arithmetic, g(x,y) may stand for \"x+y\". In set theory, it may stand for \"the union of x and y\"."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, such a constant may stand for the empty set.The traditional approach can be recovered in the modern approach, by simply specifying the \"custom\" signature to consist of the traditional sequences of non-logical symbols."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? 
(\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "What are the different theories of decision making", "positive_ctxs": [{"text": "Descriptive, prescriptive, and normative are three main areas of decision theory and each studies a different type of decision making."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Poole\u2019s multiple sequence model states that different groups make decisions through the application of different sequences. This model rejects the idea that decision making occurs in separate, succinct phases, as other rational phase models suggest. Rather, Poole theorized that decision making occurs in clusters of linking communication."}, {"text": "Info-gap decision theory is radically different from all current theories of decision under uncertainty. The difference originates in the modelling of uncertainty as an information gap rather than as a probability."}, {"text": "A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly in black swan theory, and info-gap, used in isolation, is vulnerable to this, as are a fortiori all decision theories that use a fixed universe of possibilities, notably probabilistic ones."}, {"text": "Ben-Haim (2006, p.xii) claims that info-gap is \"radically different from all current theories of decision under uncertainty,\" while Sniedovich argues that info-gap's robustness analysis is precisely maximin analysis of the horizon of uncertainty. By contrast, Ben-Haim states (Ben-Haim 1999, pp. 
271\u20132) that \"robust reliability is emphatically not a [min-max] worst-case analysis\".Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty."}, {"text": "The multiple sequence model defines different contingency variables such as group composition, task structure, and conflict management approaches, which all affect group decision making. This model consists of 36 clusters for coding group communication and four cluster-sets, such as proposal growth, conflict, socio-emotional interests, and expressions of uncertainty. By coding group decision making processes, Poole identified a set of decision paths that are usually used by groups during decision making processes.This theory also consists of various tracks that define different stages of interpersonal communication, problem solving, and decision making that occur in group communication."}, {"text": "Ben-Haim 2001, 2006) as a new non-probabilistic theory that is radically different from all current decision theories for decision under uncertainty. So, it is imperative to examine in this discussion in what way, if any, is info-gap's robustness model radically different from Maximin. For one thing, there is a well-established assessment of the utility of Maximin."}, {"text": "Poole\u2019s Multiple Sequence Model is a communication theory approach developed by Marshall Scott Poole in 1983. The model focuses on decision making processes in groups, and rejects other widely held communication theories in favor of less linear decision making processes. The multiple sequence model suggests that group activity needs a developing and changing development of communication."}]}, {"question": "What is an example of positive feedback", "positive_ctxs": [{"text": "Positive feedback occurs to increase the change or output: the result of a reaction is amplified to make it occur more quickly. 
Some examples of positive feedback are contractions in child birth and the ripening of fruit; negative feedback examples include the regulation of blood glucose levels and osmoregulation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A classic example of positive feedback is the lac operon in E. coli. Positive feedback plays an integral role in cellular differentiation, development, and cancer progression, and therefore, positive feedback in gene regulation can have significant physiological consequences. Random motions in molecular dynamics coupled with positive feedback can trigger interesting effects, such as create population of phenotypically different cells from the same parent cell."}, {"text": "A familiar example of positive feedback is the loud squealing or howling sound produced by audio feedback in public address systems: the microphone picks up sound from its own loudspeakers, amplifies it, and sends it through the speakers again."}, {"text": "A simple feedback loop is shown in the diagram. If the loop gain AB is positive, then a condition of positive or regenerative feedback exists."}, {"text": "A key feature of positive feedback is thus that small disturbances get bigger. When a change occurs in a system, positive feedback causes further change, in the same direction."}, {"text": "Another sociological example of positive feedback is the network effect. When more people are encouraged to join a network this increases the reach of the network therefore the network expands ever more quickly. A viral video is an example of the network effect in which links to a popular video are shared and redistributed, ensuring that more people see the video and then re-publish the links."}, {"text": "A self-fulfilling prophecy is a social positive feedback loop between beliefs and behavior: if enough people believe that something is true, their behavior can make it true, and observations of their behavior may in turn increase belief. 
A classic example is a bank run."}, {"text": "The difference between positive and negative feedback for AC signals is one of phase: if the signal is fed back out of phase, the feedback is negative and if it is in phase the feedback is positive. One problem for amplifier designers who use negative feedback is that some of the components of the circuit will introduce phase shift in the feedback path. If there is a frequency (usually a high frequency) where the phase shift reaches 180\u00b0, then the designer must ensure that the amplifier gain at that frequency is very low (usually by low-pass filtering)."}]}, {"question": "What are the uses of moments", "positive_ctxs": [{"text": "Moments are very useful in statistics because they tell you much about your data. There are four commonly used moments in statistics: the mean, variance, skewness, and kurtosis. The mean gives you a measure of center of the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X \u2212 E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions."}, {"text": "The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by"}, {"text": "The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson."}, {"text": "All the odd moments are zero, by \u00b1 symmetry. The even moments are the sum over all partitions into pairs of the product of G(x \u2212 y) for each pair."}]}, {"question": "Is Kalman filter optimal", "positive_ctxs": [{"text": "Kalman filters combine two sources of information, the predicted states and noisy measurements, to produce optimal, unbiased estimates of system states. The filter is optimal in the sense that it minimizes the variance in the estimated states."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The underlying model is a hidden Markov model where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. 
Also, Kalman filter has been successfully used in multi-sensor fusion, and distributed sensor networks to develop distributed or consensus Kalman filter."}, {"text": "The PDF at the previous timestep is inductively assumed to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for"}, {"text": "are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if probability distribution is Gaussian a third-order approximation is possible)."}, {"text": "The most common variants of Kalman filters for non-linear systems are the Extended Kalman Filter and Unscented Kalman filter. The suitability of which filter to use depends on the non-linearity indices of the process and observation model."}, {"text": "The Kalman filter has numerous applications in technology. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships. Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics."}, {"text": "\u2014are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean."}, {"text": "The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. 
It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear\u2013quadratic\u2013Gaussian control problem (LQG)."}]}, {"question": "What is AB testing in Analytics", "positive_ctxs": [{"text": "AB testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC."}, {"text": "A simple feedback loop is shown in the diagram. If the loop gain AB is positive, then a condition of positive or regenerative feedback exists."}, {"text": "Another approach for defining Learning Analytics is based on the concept of Analytics interpreted as the process of developing actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data. From this point of view, Learning Analytics emerges as a type of Analytics (as a process), in which the data, the problem definition and the insights are learning-related."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "If the functions A and B are linear and AB is smaller than unity, then the overall system gain from the input to output is finite, but can be very large as AB approaches unity. In that case, it can be shown that the overall or \"closed loop\" gain from input to output is:"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}]}, {"question": "What are some interesting applications of Monte Carlo method", "positive_ctxs": [{"text": "The technique of Monte Carlo Simulation (MCS) was originally developed for use in nuclear weapons design. It provides an efficient way to simulate processes involving chance and uncertainty and can be applied in areas as diverse as market sizing, customer lifetime value measurement and customer service management."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior)."}, {"text": "In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration."}, {"text": "Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). 
Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations."}, {"text": "Parallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Swendsen, then extended by Geyer and later developed, among others, by Giorgio Parisi."}]}, {"question": "What is regularization coefficient", "positive_ctxs": [{"text": "Regularized regression is a type of regression where the coefficient estimates are constrained to zero. The magnitude (size) of coefficients, as well as the magnitude of the error term, are penalized. Complex models are discouraged, primarily to avoid overfitting."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}]}, {"question": "How do you find the centroid in K means clustering", "positive_ctxs": [{"text": "Essentially, the process goes as follows: Select k centroids. These will be the center point for each segment. Assign data points to nearest centroid. Reassign centroid value to be the calculated mean value for each cluster. Reassign data points to nearest centroid. Repeat until data points stay in the same cluster."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "How do you reduce variance in machine learning", "positive_ctxs": [{"text": "Increase Training Dataset Size: Leaning on the law of large numbers, perhaps the simplest approach to reduce the model variance is to fit the model on more training data. 
In those cases where more data is not readily available, perhaps data augmentation methods can be used instead."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": ", and thus introducing some bias to reduce variance. Furthermore, it is not uncommon in machine learning to have cases where"}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "What is conditional probability examples", "positive_ctxs": [{"text": "Conditional probability is the probability of one event occurring with some relationship to one or more other events. For example: Event A is that it is raining outside, and it has a 0.3 (30%) chance of raining today. Event B is that you will need to go outside, and that has a probability of 0.5 (50%)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "is also a probability measure for all \u03c9 \u2208 \u03a9. An expectation of a random variable with respect to a regular conditional probability is equal to its conditional expectation."}, {"text": "What is the probability of winning the car given the player has picked door 1 and the host has opened door 3?The answer to the first question is 2/3, as is correctly shown by the \"simple\" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1/2. 
This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1/3) or if the car is behind door 2 (also originally with probability 1/3)."}, {"text": "is a continuous distribution, then its probability density function is known as the conditional density function. The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance."}, {"text": "are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable."}, {"text": "Beliefs depend on the available information. This idea is formalized in probability theory by conditioning. Conditional probabilities, conditional expectations, and conditional probability distributions are treated on three levels: discrete probabilities, probability density functions, and measure theory."}, {"text": "Posterior probability is a conditional probability conditioned on randomly observed data. Hence it is a random variable. For a random variable, it is important to summarize its amount of uncertainty."}]}, {"question": "What do the eigenvectors indicate", "positive_ctxs": [{"text": "Eigenvectors can be used to represent a large dimensional matrix. This means that a matrix M and a vector o can be replaced by a scalar n and a vector o. In this instance, o is the eigenvector and n is the eigenvalue and our target is to find o and n."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "But what about 12 hits, or 17 hits? 
What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is singularity container", "positive_ctxs": [{"text": "Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. The Singularity software can import your Docker images without having Docker installed or being a superuser."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A label (as distinct from signage) is a piece of paper, plastic film, cloth, metal, or other material affixed to a container or product, on which is written or printed information or symbols about the product or item. Information printed directly on a container or article can also be considered labelling."}, {"text": "Science fiction writer Vernor Vinge named this scenario \"singularity\". Technological singularity is when accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing or even ending civilization. Because the capabilities of such an intelligence may be impossible to comprehend, the technological singularity is an occurrence beyond which events are unpredictable or even unfathomable. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029 and predicts that the singularity will occur in 2045."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is the definition of squashing function in machine learning", "positive_ctxs": [{"text": "An activation function is a function used in artificial neural networks which outputs a small value for small inputs, and a larger value if its inputs exceed a threshold. 
If the inputs are large enough, the activation function \"fires\", otherwise it does nothing."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "is chosen to produce a scale equivariant and rotation invariant measure that doesn't go to zero for dependent variables. One interpretation of the characteristic function definition is that the variables eisX and eitY are cyclic representations of X and Y with different periods given by s and t, and the expression \u03d5X, Y(s, t) \u2212 \u03d5X(s) \u03d5Y(t) in the numerator of the characteristic function definition of distance covariance is simply the classical covariance of eisX and eitY. The characteristic function definition clearly shows that"}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "For example, because the product of characteristic functions \u03c61*\u03c62* ... *\u03c6n = 0 whenever any one of the functions equals 0, it plays the role of logical OR: IF \u03c61 = 0 OR \u03c62 = 0 OR ... OR \u03c6n = 0 THEN their product is 0. What appears to the modern reader as the representing function's logical inversion, i.e. the representing function is 0 when the function R is \"true\" or satisfied\", plays a useful role in Kleene's definition of the logical functions OR, AND, and IMPLY (p. 228), the bounded- (p. 228) and unbounded- (p. 279 ff) mu operators (Kleene (1952)) and the CASE function (p. 229)."}, {"text": "Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, \"intelligence\" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally."}]}, {"question": "Which of the following is true with regards to classical machine learning vs deep learning", "positive_ctxs": [{"text": "The most important difference between deep learning and traditional machine learning is its performance as the scale of data increases. When the data is small, deep learning algorithms don't perform that well. This is because deep learning algorithms need a large amount of data to understand it perfectly."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. 
Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of state space."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is weight sharing in CNN", "positive_ctxs": [{"text": "A CNN has multiple layers. Weight sharing happens across the receptive field of the neurons (filters) in a particular layer. Weights are the numbers within each filter. These filters act on a certain receptive field/ small section of the image. When the filter moves through the image, the filter does not change."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In order to learn critical acoustic-phonetic features (for example formant transitions, bursts, frication, etc.) without first requiring precise localization, the TDNN is trained time-shift-invariantly. Time-shift invariance is achieved through weight sharing across time during training: Time shifted copies of the TDNN are made over the input range (from left to right in Fig.1)."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. 
This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. 
This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}, {"text": "Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image."}]}, {"question": "Is there a relationship between the correlation coefficient and the slope of a linear regression line", "positive_ctxs": [{"text": "If we assume that there is some variation in our data, we will be able to disregard the possibility that either of these standard deviations is zero. Therefore the sign of the correlation coefficient will be the same as the sign of the slope of the regression line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. 
Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "Mathematically, one simply divides the covariance of the two variables by the product of their standard deviations. 
Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton. A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the expected values and the resulting Pearson's correlation coefficient indicates how far away the actual dataset is from the expected values. Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our dataset."}, {"text": "Mathematically, one simply divides the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton. A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the expected values and the resulting Pearson's correlation coefficient indicates how far away the actual dataset is from the expected values. Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our dataset."}, {"text": "When a relationship between the differences and the true value was identified (i.e., a significant slope of the regression line), regression-based 95% limits of agreement should be provided."}]}, {"question": "What is the difference between generative and discriminative models", "positive_ctxs": [{"text": "In general, a discriminative model models the decision boundary between the classes. A generative model explicitly models the actual distribution of each class. A discriminative model learns the conditional probability distribution p(y|x).
Both of these models are generally used in supervised learning problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "to generate new data similar to existing data. On the other hand, discriminative algorithms generally give better performance in classification tasks. Despite the fact that discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables. They don't necessarily perform better than generative models at classification and regression tasks."}, {"text": "In addition, most discriminative models are inherently supervised and cannot easily support unsupervised learning. Application-specific details ultimately dictate the suitability of selecting a discriminative versus generative model."}, {"text": "Classifiers computed without using a probability model are also referred to loosely as \"discriminative\". The distinction between these last two classes is not consistently made; Jebara (2004) refers to these three classes as generative learning, conditional learning, and discriminative learning, but Ng & Jordan (2002) only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model."}, {"text": "Discriminative models and generative models also differ in introducing the posterior possibility. To maintain the least expected loss, the minimization of result's misclassification should be acquired.
In the discriminative model, the posterior probabilities,"}, {"text": "using Bayes' theorem. Discriminative models, as opposed to generative models, do not allow one to generate samples from the joint distribution of observed and target variables. However, for tasks such as classification and regression that do not require the joint distribution, discriminative models can yield superior performance (in part because they have fewer variables to compute). On the other hand, generative models are typically more flexible than discriminative models in expressing dependencies in complex learning tasks."}, {"text": "In repeated experiments where logistic regression and naive Bayes are applied to binary classification tasks, discriminative learning results in lower asymptotic errors, while generative learning reaches its higher asymptotic error faster. However, in Ulusoy and Bishop's joint work, Comparison of Generative and Discriminative Techniques for Object Detection and Classification, they state that the above statement is true only when the model is the appropriate one for the data (i.e. the data distribution is correctly modeled by the generative model)."}, {"text": "The Fisher kernel was introduced in 1998. It combines the advantages of generative statistical models (like the hidden Markov model) and those of discriminative methods (like support vector machines):"}]}, {"question": "How do you find the Fourier transform of a signal", "positive_ctxs": [{"text": "In signal processing, the Fourier transform can reveal important characteristics of a signal, namely, its frequency components. y_{k+1} = \u2211_{j=0}^{n-1} \u03c9^{jk} x_{j+1}, where \u03c9 = e^{-2\u03c0i/n} is one of the n complex roots of unity and i is the imaginary unit.
For x and y, the indices j and k range from 0 to n-1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Since the Fourier transform of the Gaussian function yields a Gaussian function, the signal (preferably after being divided into overlapping windowed blocks) can be transformed with a Fast Fourier transform, multiplied with a Gaussian function and transformed back. This is the standard procedure of applying an arbitrary finite impulse response filter, with the only difference that the Fourier transform of the filter window is explicitly known."}, {"text": "Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):"}, {"text": "The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density."}, {"text": "Discrete-time Fourier transform (DTFT): Equivalent to the Fourier transform of a \"continuous\" function that is constructed from the discrete input function by using the sample values to modulate a Dirac comb. When the sample values are derived by sampling a function on the real line, \u0192(x), the DTFT is equivalent to a periodic summation of the Fourier transform of \u0192. The DTFT output is always periodic (cyclic)."}, {"text": "In this case the Fourier series is finite and its value is equal to the sampled values at all points. The set of coefficients is known as the discrete Fourier transform (DFT) of the given sample sequence.
The DFT is one of the key tools of digital signal processing, a field whose applications include radar, speech encoding, image compression."}, {"text": "Discrete Fourier transform (general). The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on a fast Fourier transform (FFT). The Nyquist\u2013Shannon sampling theorem is critical for understanding the output of such discrete transforms."}, {"text": "This is G, since the Fourier transform of this integral is easy. Each fixed \u03c4 contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k."}]}, {"question": "What is transfer learning and how is it useful", "positive_ctxs": [{"text": "Transfer learning is useful when you have insufficient data for a new domain you want handled by a neural network and there is a big pre-existing data pool that can be transferred to your problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory?
Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment."}, {"text": "While rule-based machine learning is conceptually a type of rule-based system, it is distinct from traditional rule-based systems, which are often hand-crafted, and other rule-based decision makers. This is because rule-based machine learning applies some form of learning algorithm to automatically identify useful rules, rather than a human needing to apply prior domain knowledge to manually construct rules and curate a rule set."}, {"text": "Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large scale machine learning projects such as the deep convolutional neural network GoogLeNet, an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks."}, {"text": "The history of learning vector-valued functions is closely linked to transfer learning- storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on \u201cLearning to Learn,\u201d which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. 
Research on transfer learning has attracted much attention since 1995 in different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}]}, {"question": "What is agent system in artificial intelligence", "positive_ctxs": [{"text": "In artificial intelligence, an intelligent agent (IA) refers to an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The learning system here is similar to any other neural styled networks, which is through modifying the connection strength between the demons; in other words, how the demons respond to each other's yelling. This multiple agent approach to human information processing became the assumption for many modern artificial intelligence systems."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research."}, {"text": "Artificial intelligence (or AI) is both the intelligence that is demonstrated by machines and the branch of computer science which aims to create it, through \"the study and design of intelligent agents\" or \"rational agents\", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. Kaplan and Haenlein define artificial intelligence as \u201ca system\u2019s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation\u201d. Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars."}, {"text": "Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. 
In artificial intelligence and computational cognitive science, \"the action selection problem\" is typically associated with intelligent agents and animats\u2014artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior."}, {"text": "A state space is the set of all possible configurations of a system. It is a useful abstraction for reasoning about the behavior of a given system and is widely used in the fields of artificial intelligence and game theory."}]}, {"question": "Does Lasso regression take care of Multicollinearity", "positive_ctxs": [{"text": "Lasso Regression Another Tolerant Method for dealing with multicollinearity known as Least Absolute Shrinkage and Selection Operator (LASSO) regression, solves the same constrained optimization problem as ridge regression, but uses the L1 norm rather than the L2 norm as a measure of complexity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In high-dimension, when number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. The Lasso estimator of the regression parameter \u03b2 is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint."}, {"text": "In high-dimension, when number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) has proposed a Lasso procedure for the proportional hazard regression parameter. 
The Lasso estimator of the regression parameter \u03b2 is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint."}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model."}, {"text": "Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective function"}, {"text": "This regularization function, while attractive for the sparsity that it guarantees, is very difficult to solve because doing so requires optimization of a function that is not even weakly convex. 
Lasso regression is the minimal possible relaxation of"}]}, {"question": "What are classification algorithms in machine learning", "positive_ctxs": [{"text": "Classification is one of the most fundamental concepts in data science. Classification algorithms are predictive calculations used to assign data to preset categories by analyzing sets of training data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques.
The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "ROC curves also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms. ROC curves are also used in verification of forecasts in meteorology."}, {"text": "Also, several digital camera systems incorporate an automatic pixel binning function to improve image contrast. Binning is also used in machine learning to speed up the decision-tree boosting method for supervised classification and regression in algorithms such as Microsoft's LightGBM and scikit-learn's Histogram-based Gradient Boosting Classification Tree."}]}, {"question": "What is loss in a neural network", "positive_ctxs": [{"text": "The Loss Function is one of the important components of Neural Networks. Loss is nothing but a prediction error of Neural Net. And the method to calculate the loss is called Loss Function. In simple words, the Loss is used to calculate the gradients. And gradients are used to update the weights of the Neural Net."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes.
Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with"}, {"text": "is a set of weights. The optimization problem of finding alpha is readily solved through neural networks, hence a \"meta-network\" where each \"neuron\" is in fact an entire neural network can be trained, and the synaptic weights of the final network is the weight applied to each expert. This is known as a linear combination of experts. It can be seen that most forms of neural networks are some subset of a linear combination: the standard neural net (where only one expert is used) is simply a linear combination with all"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome.
The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks. Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}]}, {"question": "How do I train a deep neural network", "positive_ctxs": [{"text": "How to train your Deep Neural Network: Training data. Choose appropriate activation functions. Number of Hidden Units and Layers. Weight Initialization. Learning Rates. Hyperparameter Tuning: Shun Grid Search - Embrace Random Search. Learning Methods. Keep dimensions of weights in the exponential power of 2."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any
Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}]}, {"question": "What does model calibration mean", "positive_ctxs": [{"text": "Model calibration is the process of adjustment of the model parameters and forcing within the margins of the uncertainties (in model parameters and / or model forcing) to obtain a model representation of the processes of interest that satisfies pre-agreed criteria (Goodness-of-Fit or Cost Function)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
(#5) \u2013 Finale, summing up, and my own view"}, {"text": "In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines,"}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Brigo, Damiano; Mercurio, Fabio (June 2002). \"Lognormal-mixture dynamics and calibration to market volatility smiles\". International Journal of Theoretical and Applied Finance."}, {"text": "Brigo, Damiano; Mercurio, Fabio (June 2002). \"Lognormal-mixture dynamics and calibration to market volatility smiles\". International Journal of Theoretical and Applied Finance."}, {"text": "Brigo, Damiano; Mercurio, Fabio (June 2002). \"Lognormal-mixture dynamics and calibration to market volatility smiles\". International Journal of Theoretical and Applied Finance."}]}, {"question": "Is a statistics degree useful", "positive_ctxs": [{"text": "Statistics is a very good major in terms of job market and salary scale, and it also opens doors for many graduate courses; unless you are poor at math, statistics is worth taking."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, \"The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism.\"
To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case:"}, {"text": "To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, \"The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism.\" To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case:"}, {"text": "To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, \"The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism.\" To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case:"}, {"text": "To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, \"The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism.\" To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case:"}, {"text": "To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, \"The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism.\" To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case:"}, {"text": "The odds ratio (OR) is another useful effect size. It is appropriate when the research question focuses on the degree of association between two binary variables. For example, consider a study of spelling ability."}, {"text": "3, which has a goat. He then says to you, \"Do you want to pick door No.
Is it to your advantage to switch your choice?"}]}, {"question": "Which method is used for data preprocessing in machine learning", "positive_ctxs": [{"text": "In this module, we have discussed various data preprocessing methods for machine learning, such as rescaling, binarizing, standardizing, one hot encoding, and label encoding."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The origins of data preprocessing are located in data mining. The idea is to aggregate existing information and search in the content. Later it was recognized that for machine learning and neural networks a data preprocessing step is needed too."}, {"text": "Data preprocessing is an important step in the data mining process. The phrase \"garbage in, garbage out\" is particularly applicable to data mining and machine learning projects. Data-gathering methods are often loosely controlled, resulting in out-of-range values (e.g., Income: \u2212100), impossible data combinations (e.g., Sex: Male, Pregnant: Yes), and missing values, etc."}, {"text": "Often, data preprocessing is the most important phase of a machine learning project, especially in computational biology. If there is much irrelevant and redundant information present or noisy and unreliable data, then knowledge discovery during the training phase is more difficult. Data preparation and filtering steps can take a considerable amount of processing time. Data preprocessing includes cleaning, instance selection, normalization, transformation, feature extraction and selection, etc."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge.
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Why KNN algorithm is used", "positive_ctxs": [{"text": "KNN algorithm is one of the simplest classification algorithm and it is one of the most used learning algorithms. KNN is a non-parametric, lazy learning algorithm. Its purpose is to use a database in which the data points are separated into several classes to predict the classification of a new sample point."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "If we simply compared the methods based on their in-sample error rates, the KNN method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the SVM method."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence couldn't effectively support content evolution."}, {"text": "The Metropolis\u2013Hastings algorithm is the most commonly used Monte Carlo algorithm to calculate Ising model estimations. 
The algorithm first chooses selection probabilities g(\u03bc, \u03bd), which represent the probability that state \u03bd is selected by the algorithm out of all states, given that one is in state \u03bc. It then uses acceptance probabilities A(\u03bc, \u03bd) so that detailed balance is satisfied."}, {"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}]}, {"question": "How is analysis of covariance done", "positive_ctxs": [{"text": "The Analysis of covariance (ANCOVA) is done by using linear regression. This means that Analysis of covariance (ANCOVA) assumes that the relationship between the independent variable and the dependent variable must be linear in nature."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Parameter estimation is done by comparing the actual covariance matrices representing the relationships between variables and the estimated covariance matrices of the best fitting model. This is obtained through numerical maximization via expectation\u2013maximization of a fit criterion as provided by maximum likelihood estimation, quasi-maximum likelihood estimation, weighted least squares or asymptotically distribution-free methods. This is often accomplished by using a specialized SEM analysis program, of which several exist."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. 
The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by"}]}, {"question": "What is the difference between quota sampling and stratified sampling", "positive_ctxs": [{"text": "The difference between quota sampling and stratified sampling is: although both \"group\" participants by an important characteristic, stratified sampling relies on random selection within each group, while quota sampling relies on convenience sampling within each group."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. 
In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Quota sampling is the non-probability version of stratified sampling. In stratified sampling, subsets of the population are created so that each subset has a common characteristic, such as gender. Random sampling chooses a number of subjects from each subset with, unlike a quota sample, each potential subject having a known probability of being selected."}, {"text": "OversamplingChoice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "OversamplingChoice-based sampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that the rare target class will be more represented in the sample. The model is then built on this biased sample."}, {"text": "The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}, {"text": "A common motivation of cluster sampling is to reduce costs by increasing sampling efficiency. 
This contrasts with stratified sampling where the motivation is to increase precision."}]}, {"question": "Can SVM used for regression", "positive_ctxs": [{"text": "Support Vector Machine can also be used as a regression method, maintaining all the main features that characterize the algorithm (maximal margin). The Support Vector Regression (SVR) uses the same principles as the SVM for classification, with only a few minor differences."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}, {"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}, {"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}, {"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}, {"text": "The SVM algorithm has been widely applied in the biological and other sciences. They have been used to classify proteins with up to 90% of the compounds classified correctly. 
Permutation tests based on SVM weights have been suggested as a mechanism for interpretation of SVM models."}, {"text": "The reduction immediately enables the use of highly optimized SVM solvers for elastic net problems. It also enables the use of GPU acceleration, which is often already used for large-scale SVM solvers. The reduction is a simple transformation of the original data and regularization constants"}, {"text": "Machine learning methods for analysis of neuroimaging data are used to help diagnose stroke. Three-dimensional CNN and SVM methods are often used."}]}, {"question": "What do you mean by predictive analytics", "positive_ctxs": [{"text": "Predictive analytics is the use of data, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. The goal is to go beyond knowing what has happened to providing a best assessment of what will happen in the future."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis."}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "Differentiating the fields of educational data mining (EDM) and learning analytics (LA) has been a concern of several researchers. 
George Siemens takes the position that educational data mining encompasses both learning analytics and academic analytics, the former of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. Baepler and Murdoch define academic analytics as an area that \"...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior\"."}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "How do I implement a Bayesian optimization", "positive_ctxs": [{"text": "The Bayesian Optimization algorithm can be summarized as follows:Select a Sample by Optimizing the Acquisition Function.Evaluate the Sample With the Objective Function.Update the Data and, in turn, the Surrogate Function.Go To 1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization, aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. 
By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization, aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization, aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization, aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum."}, {"text": "Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions."}, {"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. 
on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}]}, {"question": "How do you deal with Overfitting in deep learning", "positive_ctxs": [{"text": "Handling overfittingReduce the network's capacity by removing layers or reducing the number of elements in the hidden layers.Apply regularization , which comes down to adding a cost to the loss function for large weights.Use Dropout layers, which will randomly remove certain features by setting them to zero."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". 
Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}]}, {"question": "What is the difference between a model parameter and a learning algorithm\u2019s hyper parameter", "positive_ctxs": [{"text": "In summary, model parameters are estimated from data automatically and model hyperparameters are set manually and are used in processes to help estimate model parameters. Model hyperparameters are often referred to as parameters because they are the parts of the machine learning that must be set manually and tuned."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An estimator is a decision rule used for estimating a parameter. In this case the set of actions is the parameter space, and a loss function details the cost of the discrepancy between the true value of the parameter and the estimated value. For example, in a linear model with a single scalar parameter"}, {"text": "The goal is to find the parameter values for the model that \"best\" fits the data. The fit of a model to a data point is measured by its residual, defined as the difference between the actual value of the dependent variable and the value predicted by the model:"}, {"text": "The goal is to find the parameter values for the model that \"best\" fits the data. The fit of a model to a data point is measured by its residual, defined as the difference between the actual value of the dependent variable and the value predicted by the model:"}, {"text": "The sampling error is the error caused by observing a sample instead of the whole population. 
The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}, {"text": "That is, the difference of two independent identically distributed extreme-value-distributed variables follows the logistic distribution, where the first parameter is unimportant. This is understandable since the first parameter is a location parameter, i.e. it shifts the mean by a fixed amount, and if two values are both shifted by the same amount, their difference remains the same."}, {"text": "That is, the difference of two independent identically distributed extreme-value-distributed variables follows the logistic distribution, where the first parameter is unimportant. This is understandable since the first parameter is a location parameter, i.e. it shifts the mean by a fixed amount, and if two values are both shifted by the same amount, their difference remains the same."}, {"text": "Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyper parameter analogous to a ball's mass which must be chosen manually\u2014too high and the ball will roll over minima which we wish to find, too low and it will not fulfil its purpose."}]}, {"question": "What is the use of convolutional neural network", "positive_ctxs": [{"text": "A Convolutional neural network (CNN) is a neural network that has one or more convolutional layers and are used mainly for image processing, classification, segmentation and also for other auto correlated data. 
A convolution is essentially sliding a filter over the input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "The term receptive field is also used in the context of artificial neural networks, most often in relation to convolutional neural networks (CNNs). So, in a neural network context, the receptive field is defined as the size of the region in the input that produces the feature. Basically, it is a measure of association of an output feature (of any layer) to the input region (patch)."}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "The penetrating face product is used in the tensor-matrix theory of digital antenna arrays. This operation can also be used in artificial neural network models, specifically convolutional layers."}, {"text": "As of August 2018, the best performance of a single convolutional neural network trained on MNIST training data using no data augmentation is 0.25 percent error rate. Also, the Parallel Computing Center (Khmelnytskyi, Ukraine) obtained an ensemble of only 5 convolutional neural networks which performs on MNIST at 0.21 percent error rate. 
Some images in the testing dataset are barely readable and may prevent reaching test error rates of 0%."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Intuitively, the exact location of a feature is less important than its rough location relative to other features. This is the idea behind the use of pooling in convolutional neural networks. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters, memory footprint and amount of computation in the network, and hence to also control overfitting."}]}, {"question": "What is Concept shift", "positive_ctxs": [{"text": "Concept shift is closely related to concept drift. This occurs when a model learned from data sampled from one distribution needs to be applied to data drawn from another."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed."}, {"text": "Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Concept shift (same label, different features): local nodes may share the same labels but some of them correspond to different features at different local nodes. For example, images that depict a particular object can vary according to the weather condition in which they were captured."}, {"text": "Concept shift (same label, different features): local nodes may share the same labels but some of them correspond to different features at different local nodes. For example, images that depict a particular object can vary according to the weather condition in which they were captured."}, {"text": "One of the advantages of mean shift over k-means is that the number of clusters is not pre-specified, because mean shift is likely to find only a few clusters if only a small number exist. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Mean shift has soft variants."}, {"text": "One of the advantages of mean shift over k-means is that the number of clusters is not pre-specified, because mean shift is likely to find only a few clusters if only a small number exist. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter. Mean shift has soft variants."}]}, {"question": "Can I switch around the null and alternative hypothesis in hypothesis testing", "positive_ctxs": [{"text": "Null and alternate hypothesis are different and you can't interchange them. Alternate hypothesis is just the opposite of null which means there is a statistical difference in Mean / median of both the data sets."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The confidence level should indicate the likelihood that much more and better data would still be able to exclude the null hypothesis on the same side.The concept of a null hypothesis is used differently in two approaches to statistical inference. 
In the significance testing approach of Ronald Fisher, a null hypothesis is rejected if the observed data is significantly unlikely to have occurred if the null hypothesis were true. In this case, the null hypothesis is rejected and an alternative hypothesis is accepted in its place."}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. 
The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}, {"text": "Decide to either reject the null hypothesis in favor of the alternative or not reject it. The decision rule is to reject the null hypothesis H0 if the observed value tobs is in the critical region, and to accept or \"fail to reject\" the hypothesis otherwise.A common alternative formulation of this process goes as follows:"}]}, {"question": "How can machine learning overcome bias", "positive_ctxs": [{"text": "Three keys to managing bias when building AIChoose the right learning model for the problem. There's a reason all AI models are unique: Each problem requires a different solution and provides varying data resources. Choose a representative training data set. Monitor performance using real data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ", and thus introducing some bias to reduce variance. Furthermore, it is not uncommon in machine learning to have cases where"}, {"text": "Inductive bias occurs within the field of machine learning. In machine learning one seeks to develop algorithms that are able to learn to anticipate a particular output. To accomplish this, the learning algorithm is given training cases that show the expected connection."}, {"text": "A drawback of MEMMs is that they potentially suffer from the \"label bias problem,\" where states with low-entropy transition distributions \"effectively ignore their observations.\" Conditional random fields were designed to overcome this weakness,"}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. 
Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}]}, {"question": "What is the region of rejection", "positive_ctxs": [{"text": "For a hypothesis test, a researcher collects sample data. If the statistic falls within a specified range of values, the researcher rejects the null hypothesis. The range of values that leads the researcher to reject the null hypothesis is called the region of rejection."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better. A two-tailed test may still be used but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. 
As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used."}, {"text": "The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better. A two-tailed test may still be used but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used."}, {"text": "is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%, and a statistically significant result is one where the observed p-value is less than (or equal to) 5%. When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution. These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution."}, {"text": "is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%, and a statistically significant result is one where the observed p-value is less than (or equal to) 5%. When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution. These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution."}, {"text": "First, a data set's average is determined. 
Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator doesn't belong to the critical region given that the alternative hypothesis is true."}]}, {"question": "What is statistics and its purpose", "positive_ctxs": [{"text": "The Purpose of Statistics: Statistics teaches people to use a limited sample to make intelligent and accurate conclusions about a greater population. The use of tables, graphs, and charts plays a vital role in presenting the data being used to draw these conclusions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. 
Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "where the h[\u2022] sequence is the impulse response, and K is its length. x[\u2022] represents the input sequence being downsampled. In a general purpose processor, after computing y[n], the easiest way to compute y[n+1] is to advance the starting index in the x[\u2022] array by M, and recompute the dot product."}, {"text": "The purpose of control charts is to allow simple detection of events that are indicative of actual process change. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good its cause should be identified and possibly become the new way of working, where the change is bad then its cause should be identified and eliminated."}, {"text": "In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}]}, {"question": "How do you calculate similarity", "positive_ctxs": [{"text": "To calculate the similarity between two examples, you need to combine all the feature data for those two examples into a single numeric value. For instance, consider a shoe data set with only one feature: shoe size. 
You can quantify how similar two shoes are by calculating the difference between their sizes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "To calculate decimal odds, you can use the equation Return = Initial Wager x Decimal Value. For example, if you bet \u20ac100 on Liverpool to beat Manchester City at 2.00 odds you would win \u20ac200 (\u20ac100 x 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the inverse of the probability of an outcome."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "How do you normalize data in statistics", "positive_ctxs": [{"text": "Some of the more common ways to normalize data include: Transforming data using a z-score or t-score. Rescaling data to have values between 0 and 1. Standardizing residuals: Ratios used in regression analysis can force residuals into the shape of a normal distribution. Normalizing Moments using the formula \u03bc/\u03c3. More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g."}, {"text": "It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "What do you report in a multiple regression to say whether the variables are significant or not", "positive_ctxs": [{"text": "If your regression model contains independent variables that are statistically significant, a reasonably high R-squared value makes sense. Correspondingly, the good R-squared value signifies that your model explains a good proportion of the variability in the dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Multicollinearity refers to a situation in which more than two explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if, for example as in the equation above, the correlation"}, {"text": "For example, in 2005 the price of a gallon of gasoline in Saudi Arabia was US$0.91, and in Norway the price was US$6.27. The significant differences in price would not contribute to accuracy in a PPP analysis, despite all of the variables that contribute to the significant differences in price. 
More comparisons have to be made and used as variables in the overall formulation of the PPP."}, {"text": "The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly effective in computation time and robust to overfitting. Filter methods tend to select redundant variables when they do not consider the relationships between variables. However, more elaborate features try to minimize this problem by removing variables highly correlated to each other, such as the FCBF algorithm."}, {"text": "The other variables will be part of a classification or a regression model used to classify or to predict data. These methods are particularly effective in computation time and robust to overfitting. Filter methods tend to select redundant variables when they do not consider the relationships between variables. However, more elaborate features try to minimize this problem by removing variables highly correlated to each other, such as the FCBF algorithm."}, {"text": "Differences in the typical values across the dataset might initially be dealt with by constructing a regression model using certain explanatory variables to relate variations in the typical value to known quantities. There should then be a later stage of analysis to examine whether the errors in the predictions from the regression behave in the same way across the dataset. 
Thus the question becomes one of the homogeneity of the distribution of the residuals, as the explanatory variables change."}, {"text": "This is precisely the motivation for including other right-side variables in a multiple regression; but while multiple regression gives unbiased results for the effect size, it does not give a numerical value of a measure of the strength of the relationship between the two variables of interest."}]}, {"question": "How do you tell if an event is independent or dependent", "positive_ctxs": [{"text": "Independent Events: Two events A and B are said to be independent if the fact that one event has occurred does not affect the probability that the other event will occur. If whether or not one event occurs does affect the probability that the other event will occur, then the two events are said to be dependent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Two random variables, X and Y, are said to be independent if any event defined in terms of X is independent of any event defined in terms of Y. Formally, they generate independent \u03c3-algebras, where two \u03c3-algebras G and H, which are subsets of F are said to be independent if any element of G is independent of any element of H."}, {"text": "In an experiment, the variable manipulated by an experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. 
Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. 
Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}]}, {"question": "How many layers are there in deep learning", "positive_ctxs": [{"text": "3 layers"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How many layers in the cerebral cortex compare to layers in an artificial neural network is not clear, nor whether every area in the cerebral cortex exhibits the same structure, but over large areas they appear similar."}, {"text": "Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers. According to the Universal approximation theorem, deep-ness isn't necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid."}, {"text": "Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers. According to the Universal approximation theorem, deep-ness isn't necessary for a neural network to be able to approximate arbitrary continuous functions. 
Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid."}, {"text": "The observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography, drug discovery)."}, {"text": "Skipping effectively simplifies the network, using fewer layers in the initial training stages. This speeds learning by reducing the impact of vanishing gradients, as there are fewer layers to propagate through. The network then gradually restores the skipped layers as it learns the feature space."}, {"text": "Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition."}, {"text": "Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition."}]}, {"question": "Can neural networks be used for optimization", "positive_ctxs": [{"text": "There is a broad range of opportunities to study optimization problems that cannot be solved with an exact algorithm. This work proposes the use of neural networks such as heuristics to resolve optimization problems in those cases where the use of linear programming or Lagrange multipliers is not feasible."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the late 1980s, when research on neural networks regained strength, neurons with more continuous shapes started to be considered. 
The possibility of differentiating the activation function allows the direct use of the gradient descent and other optimization algorithms for the adjustment of the weights. Neural networks also started to be used as a general function approximation model."}, {"text": "Development environments for neural networks differ from the software described above primarily on two accounts \u2013 they can be used to develop custom types of neural networks and they support deployment of the neural network outside the environment. In some cases they have advanced preprocessing, analysis and visualization capabilities."}, {"text": "Neural networks can be used in different fields. The tasks to which artificial neural networks are applied tend to fall within the following broad categories:"}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional deep neural networks (CNNs) are used in computer vision. 
CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}]}, {"question": "How do you implement a decision tree", "positive_ctxs": [{"text": "While implementing the decision tree we will go through the following two phases: Building Phase. Preprocess the dataset. Split the dataset from train and test using Python sklearn package. Train the classifier. Operational Phase. Make predictions. Calculate the accuracy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. 
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "What is exponential distribution example", "positive_ctxs": [{"text": "For example, the amount of time (beginning now) until an earthquake occurs has an exponential distribution. Other examples include the length of time, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The distribution of failure times is over-laid with a curve representing an exponential distribution. For this example, the exponential distribution approximates the distribution of failure times. 
The exponential curve is a theoretical distribution fitted to the actual failure times."}, {"text": "The conjugate prior for the exponential distribution is the gamma distribution (of which the exponential distribution is a special case). The following parameterization of the gamma probability density function is useful:"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "simplifies to the exponential distribution. It is a special case of the gamma distribution. It is the distribution of a sum of"}, {"text": "The Erlang distribution is the distribution of the sum of k independent and identically distributed random variables, each having an exponential distribution. The long-run rate at which events occur is the reciprocal of the expectation of"}, {"text": "Simple exponential smoothing does not do well when there is a trend in the data, which is inconvenient. In such situations, several methods were devised under the name \"double exponential smoothing\" or \"second-order exponential smoothing,\" which is the recursive application of an exponential filter twice, thus being termed \"double exponential smoothing\". This nomenclature is similar to quadruple exponential smoothing, which also references its recursion depth."}, {"text": "are not restricted to neighbors. Note that this generalization of Ising model is sometimes called the quadratic exponential binary distribution in statistics."}]}, {"question": "What is topic Modelling used for", "positive_ctxs": [{"text": "In machine learning and natural language processing, a topic model is a type of statistical model for discovering the abstract \"topics\" that occur in a collection of documents. 
Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Also, Multi-agent Systems Artificial Intelligence (MAAI) are used for simulating societies, the purpose thereof being helpful in the fields of climate, energy, epidemiology, conflict management, child abuse, .... Some organisations working on using multi-agent system models include Center for Modelling Social Systems, Centre for Research in Social Simulation, Centre for Policy Modelling, Society for Modelling and Simulation International."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}, {"text": "Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is the topic modeling which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. 
In the topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed."}, {"text": "Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is the topic modeling which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. In the topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed."}]}, {"question": "Can Knn have linear decision boundary", "positive_ctxs": [{"text": "Because the distance function used to find the k nearest neighbors is not linear, so it usually won't lead to a linear decision boundary. kNN does not build a model of your data, it simply assumes that instances that are close together in space are similar."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity."}, {"text": "Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity."}, {"text": "Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity."}, {"text": "Nearest neighbor rules in effect implicitly compute the decision boundary. 
It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity."}, {"text": "Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity."}, {"text": "In a statistical-classification problem with two classes, a decision boundary or decision surface is a hypersurface that partitions the underlying vector space into two sets, one for each class. The classifier will classify all the points on one side of the decision boundary as belonging to one class and all those on the other side as belonging to the other class."}, {"text": ".The algorithm can be understood as selecting samples that surprises the pilot model. Intuitively these samples are closer to the decision boundary of the classifier and is thus more informative."}]}, {"question": "What is significant about Alpha Go Zero", "positive_ctxs": [{"text": "AlphaGo Zero is a version of DeepMind's Go software AlphaGo. By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Only four TPUs were used for inference. The neural network initially knew nothing about Go beyond the rules. 
Unlike earlier versions of AlphaGo, Zero only perceived the board's stones, rather than having some rare human-programmed edge cases to help recognize unusual Go board positions."}, {"text": "The self-taught AlphaGo Zero achieved a 100\u20130 victory against the early competitive version of AlphaGo, and its successor AlphaZero is currently perceived as the world's top player in Go as well as possibly in chess."}, {"text": "AlphaGo Zero is a version of DeepMind's Go software AlphaGo. AlphaGo's team published an article in the journal Nature on 19 October 2017, introducing AlphaGo Zero, a version created without using data from human games, and stronger than any previous version. By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.Training artificial intelligence (AI) without datasets derived from human experts has significant implications for the development of AI with superhuman skills because expert data is \"often expensive, unreliable or simply unavailable.\""}, {"text": "Mark Pesce of the University of Sydney called AlphaGo Zero \"a big technological advance\" taking us into \"undiscovered territory\".Gary Marcus, a psychologist at New York University, has cautioned that for all we know, AlphaGo may contain \"implicit knowledge that the programmers have about how to construct machines to play problems like Go\" and will need to be tested in other domains before being sure that its base architecture is effective at much more than playing Go. 
In contrast, DeepMind is \"confident that this approach is generalisable to a large number of domains\".In response to the reports, South Korean Go professional Lee Sedol said, \"The previous version of AlphaGo wasn\u2019t perfect, and I believe that\u2019s why AlphaGo Zero was made.\""}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie, who at the time continuously held the world No. 1 ranking for two years. This marked the completion of a significant milestone in the development of Artificial Intelligence as Go is a relatively complex game, more so than Chess."}]}, {"question": "What is the difference between discrete and continuous distribution", "positive_ctxs": [{"text": "Control Charts: A discrete distribution is one in which the data can only take on certain values, for example integers. A continuous distribution is one in which data can take on any value within a specified range (which may be infinite)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The raison d'\u00eatre of the measure-theoretic treatment of probability is that it unifies the discrete and the continuous cases, and makes the difference a question of which measure is used. 
Furthermore, it covers distributions that are neither discrete nor continuous nor mixtures of the two."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Another difference is that the threshold is constant. The model SRM0 can be formulated in discrete or continuous time. For example, in continuous time, the single-neuron equation is"}, {"text": "Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. 
It is defined as the Kullback\u2013Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e."}]}, {"question": "What is the relationship between language and thought", "positive_ctxs": [{"text": "The bits of linguistic information that enter into one person's mind, from another, cause people to entertain a new thought with profound effects on his world knowledge, inferencing, and subsequent behavior. Language neither creates nor distorts conceptual life. Thought comes first, while language is an expression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses, and the relationships among the electrical activities of the neurons within the ensemble. It is thought that neurons can encode both digital and analog information."}, {"text": "However, a better experiment is to compute the natural direct effect. (NDE) This is the effect determined by leaving the relationship between X and M untouched while intervening on the relationship between X and Y."}, {"text": "Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%. This sample value is an unbiased estimator of the population value, so the sample suggests that the best estimate of the common language effect size in the population is 90%.The relationship between f and the Mann\u2013Whitney U (specifically"}, {"text": "Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%. 
This sample value is an unbiased estimator of the population value, so the sample suggests that the best estimate of the common language effect size in the population is 90%.The relationship between f and the Mann\u2013Whitney U (specifically"}, {"text": "There is no connection between A and B; the correlation is a coincidence.Thus there can be no conclusion made regarding the existence or the direction of a cause-and-effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained."}, {"text": "There is no connection between A and B; the correlation is a coincidence.Thus there can be no conclusion made regarding the existence or the direction of a cause-and-effect relationship only from the fact that A and B are correlated. Determining whether there is an actual cause-and-effect relationship requires further investigation, even when the relationship between A and B is statistically significant, a large effect size is observed, or a large part of the variance is explained."}, {"text": "Intuitively, ANCOVA can be thought of as 'adjusting' the DV by the group means of the CV(s).The ANCOVA model assumes a linear relationship between the response (DV) and covariate (CV):"}]}, {"question": "Is Machine Learning Biased", "positive_ctxs": [{"text": "Machine learning, a subset of artificial intelligence (AI), depends on the quality, objectivity and size of training data used to teach it. Machine learning bias generally stems from problems introduced by the individuals who design and/or train the machine learning systems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bifet, Albert; Gavald\u00e0, Ricard; Holmes, Geoff; Pfahringer, Bernhard (2018). Machine Learning for Data Streams with Practical Examples in MOA. 
Adaptive Computation and Machine Learning."}, {"text": "In February 2017, IBM announced the first Machine Learning Hub in Silicon Valley to share expertise and teach companies about machine learning and data science. In April 2017 they expanded to Toronto, Beijing, and Stuttgart. A fifth Machine Learning Hub was created in August 2017 in India, Bangalore."}, {"text": "Gers, Felix A.; Schraudolph, Nicol N.; Schmidhuber, J\u00fcrgen (Aug 2002). \"Learning precise timing with LSTM recurrent networks\" (PDF). Journal of Machine Learning Research."}, {"text": "The following tree was constructed using JBoost on the spambase dataset (available from the UCI Machine Learning Repository). In this example, spam is coded as 1 and regular email is coded as \u22121."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Ioffe, Sergey; Szegedy, Christian (2015). \"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift\", ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, July 2015 Pages 448\u2013456"}]}, {"question": "Which devices support TensorFlow Lite for inference", "positive_ctxs": [{"text": "TensorFlow Lite inferenceAndroid Platform.iOS Platform.Linux Platform."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. 
In January 2019, TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.TensorFlow Lite uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models."}, {"text": "In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. In January 2019, TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging.TensorFlow Lite uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models."}, {"text": "In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones known as edge computing."}, {"text": "In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones known as edge computing."}, {"text": "Algorithmic inference gathers new developments in the statistical inference methods made feasible by the powerful computing devices widely available to any data analyst. 
Cornerstones in this field are computational learning theory, granular computing, bioinformatics, and, long ago, structural probability (Fraser 1966)."}, {"text": "In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript.In Jan 2019, Google announced TensorFlow 2.0. It became officially available in Sep 2019.In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics."}, {"text": "In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript.In Jan 2019, Google announced TensorFlow 2.0. It became officially available in Sep 2019.In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics."}]}, {"question": "Who created the law of averages", "positive_ctxs": [{"text": "Jakob Bernoulli"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails."}, {"text": "Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails."}, {"text": "The gambler's fallacy is a particular misapplication of the law of averages in which the gambler believes that a particular outcome is more likely because it has not happened recently, or (conversely) that because a particular outcome has recently occurred, it will be less likely in the immediate future.As an example, consider a roulette wheel that has landed on red in three consecutive spins. 
An onlooker might apply the law of averages to conclude that on its next spin it must (or at least is much more likely to) land on black. Of course, the wheel has no memory and its probabilities do not change according to past results."}, {"text": "The gambler's fallacy is a particular misapplication of the law of averages in which the gambler believes that a particular outcome is more likely because it has not happened recently, or (conversely) that because a particular outcome has recently occurred, it will be less likely in the immediate future.As an example, consider a roulette wheel that has landed on red in three consecutive spins. An onlooker might apply the law of averages to conclude that on its next spin it must (or at least is much more likely to) land on black. Of course, the wheel has no memory and its probabilities do not change according to past results."}, {"text": "A more recent attempt in edge-enhancing smoothing was also proposed by J. E. Kyprianidis. The filter's output is a weighed sum of the local averages with more weight given the averages of more homogenous regions."}, {"text": "In this example, one tries to increase the probability of a rare event occurring at least once by carrying out more trials. For example, a job seeker might argue, \"If I send my r\u00e9sum\u00e9 to enough places, the law of averages says that someone will eventually hire me.\" Assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome."}, {"text": "In this example, one tries to increase the probability of a rare event occurring at least once by carrying out more trials. 
For example, a job seeker might argue, \"If I send my r\u00e9sum\u00e9 to enough places, the law of averages says that someone will eventually hire me.\" Assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome."}]}, {"question": "What is a regression model example", "positive_ctxs": [{"text": "Simple regression analysis uses a single x variable for each dependent \u201cy\u201d variable. For example: (x1, Y1). Multiple regression uses multiple \u201cx\u201d variables for each independent variable: (x1)1, (x2)1, (x3)1, Y1)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is the least squares method\u2014which evaluates appropriateness of linear regression model to model bivariate dataset, but whose the limitation is related to known distribution of the data."}, {"text": "To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is the least squares method\u2014which evaluates appropriateness of linear regression model to model bivariate dataset, but whose the limitation is related to known distribution of the data."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. 
What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables."}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}]}, {"question": "What are the challenges in training a neural network", "positive_ctxs": [{"text": "Training deep learning neural networks is very challenging. The best general algorithm known for solving this problem is stochastic gradient descent, where model weights are updated each iteration using the backpropagation of error algorithm. Optimization in general is an extremely difficult task."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. 
A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "Training the weights in a neural network can be modeled as a non-linear global optimization problem. A target function can be formed to evaluate the fitness or error of a particular weight vector as follows: First, the weights in the network are set according to the weight vector. Next, the network is evaluated against the training sequence."}, {"text": "(2017) proposed elastic weight consolidation (EWC), a method to sequentially train a single artificial neural network on multiple tasks. This technique supposes that some weights of the trained neural network are more important for previously learned tasks than others. 
During training of the neural network on a new task, changes to the weights of the network are made less likely the greater their importance."}, {"text": "Computational learning theory is concerned with training classifiers on a limited amount of data. In the context of neural networks a simple heuristic, called early stopping, often ensures that the network will generalize well to examples not in the training set."}]}, {"question": "What are robust regressions and robust statistics", "positive_ctxs": [{"text": "In robust statistics, robust regression is a form of regression analysis designed to overcome some limitations of traditional parametric and non-parametric methods. Regression analysis seeks to find the relationship between one or more independent variables and a dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Although uptake of robust methods has been slow, modern mainstream statistics text books often include discussion of these methods (for example, the books by Seber and Lee, and by Faraway; for a good general description of how the various robust regression methods developed from one another see Andersen's book). Also, modern statistical software packages such as R, Statsmodels, Stata and S-PLUS include considerable functionality for robust estimation (see, for example, the books by Venables and Ripley, and by Maronna et al."}, {"text": "The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. 
Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean."}, {"text": "Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of five number summary of numerical data\u2014the two extremes (maximum and minimum), the median, and the quartiles\u2014because these median and quartiles, being functions of the empirical distribution are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than traditional summaries (the mean and standard deviation). The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems)."}, {"text": "Thus, to have a clear picture of info-gap's modus operandi and its role and place in decision theory and robust optimization, it is imperative to examine it within this context. In other words, it is necessary to establish info-gap's relation to classical decision theory and robust optimization."}, {"text": "In robust statistics, robust regression is a form of regression analysis designed to overcome some limitations of traditional parametric and non-parametric methods. Regression analysis seeks to find the relationship between one or more independent variables and a dependent variable. Certain widely used methods of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results if those assumptions are not true; thus ordinary least squares is said to be not robust to violations of its assumptions."}, {"text": "Convex hulls have wide applications in many fields. 
Within mathematics, convex hulls are used to study polynomials, matrix eigenvalues, and unitary elements, and several theorems in discrete geometry involve convex hulls. They are used in robust statistics as the outermost contour of Tukey depth, are part of the bagplot visualization of two-dimensional data, and define risk sets of randomized decision rules."}, {"text": "For univariate distributions that are symmetric about one median, the Hodges\u2013Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges\u2013Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median. The Hodges\u2013Lehmann estimator has been generalized to multivariate distributions."}]}, {"question": "What is multivariate variable", "positive_ctxs": [{"text": "The term \u201cmultivariate statistics\u201d is appropriately used to include all statistics where there are more than two variables simultaneously analyzed. You are already familiar with bivariate statistics such as the Pearson product moment correlation coefficient and the independent groups t-test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. 
It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "In an experiment, the variable manipulated by an experimenter is something that is proven to work called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is something that is proven to work called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is something that is proven to work called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. 
Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}, {"text": "In an experiment, the variable manipulated by an experimenter is something that is proven to work called an independent variable. The dependent variable is the event expected to change when the independent variable is manipulated.In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data."}]}, {"question": "What does regression analysis tell you", "positive_ctxs": [{"text": "Use regression analysis to describe the relationships between a set of independent variables and the dependent variable. Regression analysis produces a regression equation where the coefficients represent the relationship between each independent variable and the dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. 
What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly."}, {"text": "Now, assume (for example) that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Generate N random numbers from a categorical distribution of size n and probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category."}]}, {"question": "What are the two common hash functions", "positive_ctxs": [{"text": "The most common hash functions used in digital forensics are Message Digest 5 (MD5), and Secure Hashing Algorithm (SHA) 1 and 2."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this example, there is a 50% probability that the hash collision cancels out. Multiple hash functions can be used to further reduce the risk of collisions. Furthermore, if \u03c6 is the transformation implemented by a hashing trick with a sign hash \u03be (i.e. 
\u03c6(x) is the feature vector produced for a sample x), then inner products in the hashed space are unbiased:"}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "It has been suggested that a second, single-bit output hash function \u03be be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes"}, {"text": "Composite partitioning: allows for certain combinations of the above partitioning schemes, by for example first applying a range partitioning and then a hash partitioning. Consistent hashing could be considered a composite of hash and list partitioning where the hash reduces the key space to a size that can be listed."}, {"text": "A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. 
Hash collisions are typically dealt with via freed-up memory to increase the number of hash buckets."}]}, {"question": "How do you use logistic regression for multi class classification", "positive_ctxs": [{"text": "Multiclass classification with logistic regression can be done either through the one-vs-rest scheme, in which for each class a binary classification problem of data belonging or not to that class is done, or by changing the loss function to cross-entropy loss."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression) [1], multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j'th class given a sample vector x and a weighting vector w is:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}]}, {"question": "Why is being bias bad", "positive_ctxs": [{"text": "Bias can damage research, if the researcher chooses to allow his bias to distort the measurements and observations or their interpretation. When faculty are biased about individual students in their courses, they may grade some students more or less favorably than others, which is not fair to any of the students."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. 
In statistics, \"bias\" is an objective property of an estimator."}, {"text": "The reason is that, for finite n, BIC can have a substantial risk of selecting a very bad model from the candidate set. This reason can arise even when n is much larger than k2. With AIC, the risk of selecting a very bad model is minimized."}, {"text": "traces(P) is \"prefix-closed\": Let \u03b2 \u2208 traces(P) and \u03b2' represent a finite prefix of \u03b2. In this case, \u03b2' \u2208 traces(P). To elaborate, assume there is a trace in which nothing bad happens. Therefore, nothing bad happens in any prefix of that trace."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L.; Gardner, S.; Oppy, G. (Dec 2007). \"Why Simplicity is no Problem for Bayesians\"."}, {"text": "In research, the observer bias is a form of detection bias originating at a study\u2019s stage of observing or recording information. Different observers may assess subjective criteria differently, and cognitive biases (including preconceptions and assumptions) can affect how a subject is assessed. For example, being aware of a subject\u2019s disease status may introduce a bias in how the outcome is assessed."}, {"text": "In the ATM example, a minimal bad prefix is a finite set of steps carried out in which money is dispensed in the last step and a PIN is not entered at any step. To verify a safety property, it is sufficient to consider only the finite traces of a Kripke structure and check whether any such trace is a bad prefix. An LT property P is a safety property if and only if"}]}, {"question": "What is forward and backward chaining in AI", "positive_ctxs": [{"text": "Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the required facts that support the goal. 
Backward chaining reasoning applies a depth-first search strategy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because the data determines which rules are selected and used, this method is called data-driven, in contrast to goal-driven backward chaining inference. The forward chaining approach is often employed by expert systems, such as CLIPS."}, {"text": "Forward chaining (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining."}, {"text": "Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved."}, {"text": "Backward chaining is a bit less straightforward. In backward chaining the system looks at possible conclusions and works backward to see if they might be true. So if the system was trying to determine if Mortal(Socrates) is true it would find R1 and query the knowledge base to see if Man(Socrates) is true."}, {"text": "Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. 
The backward chaining approach is often employed by expert systems."}, {"text": "One of the first and most popular forward chaining engines was OPS5, which used the Rete algorithm to optimize the efficiency of rule firing. Another very popular technology that was developed was the Prolog logic programming language. Prolog focused primarily on backward chaining and also featured various commercial versions and optimizations for efficiency and robustness. As Expert Systems prompted significant interest from the business world, various companies, many of them started or guided by prominent AI researchers, created productized versions of inference engines."}]}, {"question": "What is random error and how can it be reduced", "positive_ctxs": [{"text": "Random error can be reduced by: using an average measurement from a set of measurements, or increasing sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Random error, which may vary from one observation to another. Systematic error is sometimes called statistical bias. It may often be reduced with standardized procedures. 
Part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimize systematic error."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as"}, {"text": "Endsley (2017) describes how high system reliability can lead users to disengage from monitoring systems, thereby increasing monitoring errors, decreasing situational awareness, and interfering with an operator's ability to re-assume control of the system in the event performance limitations have been exceeded. This complacency can be sharply reduced when automation reliability varies over time instead of remaining constant, but is not reduced by experience and practice. Both expert and inexpert participants can exhibit automation bias as well as automation complacency."}, {"text": "Measurement errors can be divided into two components: random error and systematic error. Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measurements of a constant attribute or quantity are taken. Systematic errors are errors that are not determined by chance but are introduced by an inaccuracy (involving either the observation or measurement process) inherent to the system. 
Systematic error may also refer to an error with a non-zero mean, the effect of which is not reduced when observations are averaged."}]}, {"question": "What is an example of pattern recognition", "positive_ctxs": [{"text": "An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is \"spam\" or \"non-spam\"). This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Other typical applications of pattern recognition techniques are automatic speech recognition, speaker identification, classification of text into several categories (e.g., spam/non-spam email messages), the automatic recognition of handwriting on postal envelopes, automatic recognition of images of human faces, or handwriting image extraction from medical forms. The last two examples form the subtopic image analysis of pattern recognition that deals with digital images as input to pattern recognition systems. Optical character recognition is a classic example of the application of a pattern classifier, see OCR-example. 
The method of signing one's name was captured with stylus and overlay starting in 1990."}, {"text": "In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is \"spam\" or \"non-spam\")."}, {"text": "Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "Is the softmax loss the same as the cross entropy loss", "positive_ctxs": [{"text": "In short, Softmax Loss is actually just a Softmax Activation plus a Cross-Entropy Loss. Softmax is an activation function that outputs the probability for each class and these probabilities will sum up to one. 
Cross Entropy loss is just the sum of the negative logarithm of the probabilities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The cross entropy loss is closely related to the Kullback\u2013Leibler divergence between the empirical distribution and the predicted distribution. The cross entropy loss is ubiquitous in modern deep neural networks."}, {"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}). Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is"}, {"text": "A benefit of the square loss function is that its structure lends itself to easy cross validation of regularization parameters. 
Specifically for Tikhonov regularization, one can solve for the regularization parameter using leave-one-out cross-validation in the same time as it would take to solve a single problem. The minimizer of"}, {"text": "The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. It combines the best properties of L2 squared loss and L1 absolute loss by being strongly convex when close to the target/minimum and less steep for extreme values. This steepness can be controlled by the"}]}, {"question": "What are filters in neural networks", "positive_ctxs": [{"text": "In Convolutional Neural Networks, Filters detect spatial patterns such as edges in an image by detecting the changes in intensity values of the image."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Ans and Rousset (1997) also proposed a two-network artificial neural architecture with memory self-refreshing that overcomes catastrophic interference when sequential learning tasks are carried out in distributed networks trained by backpropagation. The principle is to interleave, at the time when new external patterns are learned, those to-be-learned new external patterns with internally generated pseudopatterns, or 'pseudo-memories', that reflect the previously learned information. What mainly distinguishes this model from those that use classical pseudorehearsal in feedforward multilayer networks is a reverberating process that is used for generating pseudopatterns."}, {"text": "Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling. 
Long short-term memory is particularly effective for this use. Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR)."}]}, {"question": "What is covariance matrix example", "positive_ctxs": [{"text": "Covariance Matrix is a measure of how much two random variables change together. The Covariance Matrix is also known as dispersion matrix and variance-covariance matrix. 
The covariance between two jointly distributed real-valued random variables X and Y with finite second moments is defined as."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "the result is a k \u00d7 k positive-semidefinite covariance matrix of rank k \u2212 1. In the special case where k = n and where the pi are all equal, the covariance matrix is the centering matrix."}, {"text": "where Z is a normalization constant, A is a symmetric positive definite matrix (inverse covariance matrix a.k.a. precision matrix) and b is the shift vector."}, {"text": "Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk\u00b7xk|k-1 that are associated with auxiliary observations in"}, {"text": "(See the general article on the exponential family, and consider also the Wishart distribution, conjugate prior of the covariance matrix of a multivariate normal distribution, for an example where a large dimensionality is involved.)"}, {"text": "Here, Cov(\u22c5, \u22c5) is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. 
The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric."}]}, {"question": "What is a deconvolution layer", "positive_ctxs": [{"text": "Deconvolution layer is a very unfortunate name and should rather be called a transposed convolutional layer. Visually, for a transposed convolution with stride one and no padding, we just pad the original input (blue entries) with zeroes (white entries) (Figure 1)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "An RBM is an undirected, generative energy-based model with a \"visible\" input layer and a hidden layer and connections between but not within layers. 
This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the \"lowest\" pair of layers (the lowest visible layer is a training set)."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}]}, {"question": "What are the common aspects of swarm intelligence observed in nature", "positive_ctxs": [{"text": "Examples in natural systems of swarm intelligence include bird flocking, ant foraging, and fish schooling. Inspired by such swarm behavior, a class of algorithms is proposed for tackling optimization problems, usually under the title of swarm intelligence algorithms (SIAs) [203]."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The application of swarm principles to robots is called swarm robotics while swarm intelligence refers to the more general set of algorithms. Swarm prediction has been used in the context of forecasting problems. 
Similar approaches to those proposed for swarm robotics are considered for genetically modified organisms in synthetic collective intelligence."}, {"text": "The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett Packard in the mid-1990s, with a number of variants existing."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Human factors, 'human work capital' is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, general participation, etc. Although all of these aspects are not measurable in terms of quantitative criteria, the general overview of them could be summarized as a quantitative property."}, {"text": "Evidence of a general factor of intelligence has been observed in non-human animals. The general factor of intelligence, or g factor, is a psychometric construct that summarizes the correlations observed between an individual's scores on a wide range of cognitive abilities. First described in humans, the g factor has since been identified in a number of non-human species. Cognitive ability and intelligence cannot be measured using the same, largely verbally dependent, scales developed for humans."}, {"text": "Anita Woolley presents Collective intelligence as a measure of group intelligence and group creativity. The idea is that a measure of collective intelligence covers a broad range of features of the group, mainly group composition and group interaction. 
The features of composition that lead to increased levels of collective intelligence in groups include criteria such as higher numbers of women in the group as well as increased diversity of the group. Atlee and P\u00f3r suggest that the field of collective intelligence should primarily be seen as a human enterprise in which mind-sets, a willingness to share and an openness to the value of distributed intelligence for the common good are paramount, though group theory and artificial intelligence have something to offer."}, {"text": "NASA is investigating the use of swarm technology for planetary mapping. A 1992 paper by M. Anthony Lewis and George A. Bekey discusses the possibility of using swarm intelligence to control nanobots within the body for the purpose of killing cancer tumors. Conversely, al-Rifaie and Aber have used stochastic diffusion search to help locate tumours."}]}, {"question": "Is cluster sampling random or non random", "positive_ctxs": [{"text": "Simple random sampling: By using the random number generator technique, the researcher draws a sample from the population called simple random sampling. Simple random samplings are of two types. Cluster sampling: Cluster sampling occurs when a random sample is drawn from certain aggregational geographical groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. 
Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters."}, {"text": "The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a \"one-stage\" cluster sampling plan. 
If a simple random subsample of elements is selected within each of these groups, this is referred to as a \"two-stage\" cluster sampling plan."}]}, {"question": "What is bootstrap method in statistics", "positive_ctxs": [{"text": "The bootstrap method is a resampling technique used to estimate statistics on a population by sampling a dataset with replacement. It can be used to estimate summary statistics such as the mean or standard deviation. That when using the bootstrap you must choose the size of the sample and the number of repeats."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A bootstrap campaign should be distinguished from a genuine news story of genuine interest, such as a natural disaster that kills thousands, or the death of a respected public figure. It is legitimate for these stories to be given coverage across all media platforms. What distinguishes a bootstrap from a real story is the contrived and organized manner in which the bootstrap appears to come out of nowhere."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g., official statistics agencies). 
On the other hand, when this verification feature is not crucial and it is of interest not to have a number but just an idea of its distribution, the bootstrap is preferred (e.g., studies in physics, economics, biological sciences)."}]}, {"question": "What is PCA in neural network", "positive_ctxs": [{"text": "Principal components analysis (PCA) is a statistical technique that allows identifying underlying linear patterns in a data set so it can be expressed in terms of other data set of a significatively lower dimension without much loss of information."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with"}, {"text": "LeNet is a convolutional neural network structure proposed by Yann LeCun et al. In general, LeNet refers to lenet-5 and is a simple convolutional neural network. Convolutional neural networks are a kind of feed-forward neural network whose artificial neurons can respond to a part of the surrounding cells in the coverage range and perform well in large-scale image processing."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}, {"text": "The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:"}]}, {"question": "What is the use of ridge regression", "positive_ctxs": [{"text": "Ridge Regression is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates are unbiased, but their variances are large so they may be far from the true value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. 
Both lasso and ridge regression can be interpreted as minimizing the same objective function"}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features."}, {"text": "Therefore, the lasso estimates share features of both ridge and best subset selection regression since they both shrink the magnitude of all the coefficients, like ridge regression and set some of them to zero, as in the best subset selection case. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Compared to ordinary least squares, ridge regression is not unbiased. 
It accepts little bias to reduce variance and the mean square error, and helps to improve the prediction accuracy. Thus, the ridge estimator yields more stable solutions by shrinking coefficients but suffers from the lack of sensitivity to the data."}, {"text": "Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normal prior distributions, lasso can be interpreted as linear regression for which the coefficients have Laplace prior distributions. The Laplace distribution is sharply peaked at zero (its first derivative is discontinuous) and it concentrates its probability mass closer to zero than does the normal distribution. This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not."}]}, {"question": "What are the two types of hypothesis testing", "positive_ctxs": [{"text": "A hypothesis is an approximate explanation that relates to the set of facts that can be tested by certain further investigations. There are basically two types, namely, null hypothesis and alternative hypothesis. A research generally starts with a problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. 
The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. 
The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}, {"text": "Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true."}]}, {"question": "What are the disadvantages of non probability sampling", "positive_ctxs": [{"text": "One major disadvantage of non-probability sampling is that it's impossible to know how well you are representing the population. Plus, you can't calculate confidence intervals and margins of error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Even though convenience sampling can be easy to obtain, its disadvantages usually outweigh the advantages. This sampling technique may be more appropriate for one type of study and less for another."}, {"text": "Some researchers have used search engines to construct sampling frames. This technique has disadvantages because search engine results are unsystematic and non-random making them unreliable for obtaining an unbiased sample. The sampling frame issue can be circumvented by using an entire population of interest, such as tweets by particular Twitter users or online archived content of certain newspapers as the sampling frame."}, {"text": "Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection can't be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. 
Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors."}, {"text": "Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection can't be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella?"}, {"text": "In the finding of a pathognomonic sign or symptom it is almost certain that the target condition is present, and in the absence of finding a sine qua non sign or symptom it is almost certain that the target condition is absent. In reality, however, the subjective probability of the presence of a condition is never exactly 100% or 0%, so tests are rather aimed at estimating a post-test probability of a condition or other entity."}]}, {"question": "How do you analyze univariate data", "positive_ctxs": [{"text": "Univariate analysis is the simplest form of analyzing data. \u201cUni\u201d means \u201cone\u201d, so in other words your data has only one variable. 
It doesn't deal with causes or relationships (unlike regression) and its major purpose is to describe; It takes data, summarizes that data and finds patterns in the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. Modeling helps to analyze experimental data and address questions such as: How are the spikes of a neuron related to sensory stimulation or motor activity such as arm movements? What is the neural code used by the nervous system?"}, {"text": "In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the \"variable\": a univariate time series is the series of values over time of a single quantity."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is the difference between random forest and gradient boosting", "positive_ctxs": [{"text": "Like random forests, gradient boosting is a set of decision trees. The two main differences are: How trees are built: random forests builds each tree independently while gradient boosting builds one tree at a time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model."}, {"text": "The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model."}, {"text": "As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the \u201cobserved\u201d data from suitably generated synthetic data."}, {"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models. 
Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "is to fit a random forest to the data. During the fitting process the out-of-bag error for each data point is recorded and averaged over the forest (errors on an independent test set can be substituted if bagging is not used during training)."}, {"text": "The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. A random forest dissimilarity can be attractive because it handles mixed variable types very well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The random forest dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the \"Addcl 1\" random forest dissimilarity weighs the contribution of each variable according to how dependent it is on other variables."}]}, {"question": "What are bias in machine learning", "positive_ctxs": [{"text": "Bias in Machine Learning is defined as the phenomena of observing results that are systematically prejudiced due to faulty assumptions. This also results in bias which arises from the choice of training and test data and their representation of the true population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": ", and thus introducing some bias to reduce variance. 
Furthermore, it is not uncommon in machine learning to have cases where"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}, {"text": "Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased data can result in skewed or undesired predictions. Algorithmic bias is a potential result from data not fully prepared for training."}]}, {"question": "Can you do multiple regression with categorical variables", "positive_ctxs": [{"text": "Categorical variables require special attention in regression analysis because, unlike dichotomous or continuous variables, they cannot by entered into the regression equation just as they are. Instead, they need to be recoded into a series of variables which can then be entered into the regression model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables."}, {"text": "LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables."}, {"text": "LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. 
However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables."}, {"text": "LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables."}]}, {"question": "Is it important to determine the sample size", "positive_ctxs": [{"text": "The larger the sample size is the smaller the effect size that can be detected. The reverse is also true; small sample sizes can detect large effect sizes. Thus an appropriate determination of the sample size used in a study is a crucial step in the design of a study."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? 
Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "In any report or article, the structure of the sample must be accurately described. It is especially important to exactly determine the structure of the sample (and specifically the size of the subgroups) when subgroup analyses will be performed during the main analysis phase."}, {"text": "Look at the candidates to drop and the components to be dropped. Is there anything that needs to be retained because it is critical to one's construct ? For example, if a conceptually important item only cross loads on a component to be dropped, it is good to keep it for the next round."}, {"text": "be the sample size collected from each group. The permutation test is designed to determine whether the observed difference between the sample means is large enough to reject, at some significance level, the null hypothesis H"}, {"text": "be the sample size collected from each group. The permutation test is designed to determine whether the observed difference between the sample means is large enough to reject, at some significance level, the null hypothesis H"}, {"text": "be the sample size collected from each group. The permutation test is designed to determine whether the observed difference between the sample means is large enough to reject, at some significance level, the null hypothesis H"}]}, {"question": "What is ReLU function in neural network", "positive_ctxs": [{"text": "The rectified linear activation function or ReLU for short is a piecewise linear function that will output the input directly if it is positive, otherwise, it will output zero. 
The rectified linear activation is the default activation when developing multilayer Perceptron and convolutional neural networks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep 
neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}]}, {"question": "What's the difference between false negative and false positive", "positive_ctxs": [{"text": "A false positive means that the results say you have the condition you were tested for, but you really don't. With a false negative, the results say you don't have a condition, but you really do."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."}, {"text": "In statistics, when performing multiple comparisons, a false positive ratio (also known as fall-out or false alarm ratio) is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive (false positives) and the total number of actual negative events (regardless of classification)."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. 
These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error.In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error.In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) 
They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error.In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "A false positive is an error in binary classification in which a test result incorrectly indicates the presence of a condition such as a disease when the disease is not present, while a false negative is the opposite error where the test result incorrectly fails to indicate the presence of a condition when it is present. These are the two kinds of errors in a binary test, in contrast to the two kinds of correct result (a true positive and a true negative.) They are also known in medicine as a false positive (or false negative) diagnosis, and in statistical classification as a false positive (or false negative) error.In statistical hypothesis testing the analogous concepts are known as type I and type II errors, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis."}, {"text": "In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; \"false\" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative."}]}, {"question": "What is the expectation of a random variable", "positive_ctxs": [{"text": "The mean, expected value, or expectation of a random variable X is writ- ten as E(X) or \u00b5X. If we observe N random values of X, then the mean of the N values will be approximately equal to E(X) for large N. 
The expectation is defined differently for continuous and discrete random variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "which is a random variable. Note that the expectation of this random variable is equal to the probability of A itself:"}, {"text": "Intuitively, the expectation of a random variable taking values in a countable set of outcomes is defined analogously as the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value. However, convergence issues associated with the infinite sum necessitate a more careful definition. A rigorous definition first defines expectation of a non-negative random variable, and then adapts it to general random variables."}, {"text": "Intuitively, the expectation of a random variable taking values in a countable set of outcomes is defined analogously as the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value. However, convergence issues associated with the infinite sum necessitate a more careful definition. A rigorous definition first defines expectation of a non-negative random variable, and then adapts it to general random variables."}, {"text": "A particular case of differential games are the games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function."}, {"text": "(In this example it appears to be a linear function, but in general it is nonlinear.) One may also treat the conditional expectation as a random variable, \u2014 a function of the random variable X, namely,"}, {"text": "for \u22121 < x < 1. One may also treat the conditional expectation as a random variable, \u2014 a function of the random variable X, namely,"}, {"text": "The expectation of a random variable plays an important role in a variety of contexts. 
For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function."}]}, {"question": "What does Optimizer mean", "positive_ctxs": [{"text": "Noun. optimizer (plural optimizers) A person in a large business whose task is to maximize profits and make the business more efficient. (computing) A program that uses linear programming to optimize a process. (computing) A compiler or assembler that produces optimized code."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What happens when one number is zero, both numbers are zero? 
(\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "But what does \"twice as likely\" mean in terms of a probability? It cannot literally mean to double the probability value (e.g. 50% becomes 100%, 75% becomes 150%, etc.)."}]}, {"question": "Is it possible for two things to have a causal relationship but not be correlated", "positive_ctxs": [{"text": "It is well known that correlation does not prove causation. What is less well known is that causation can exist when correlation is zero. The upshot of these two facts is that, in general and without additional information, correlation reveals literally nothing about causation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship."}, {"text": "Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data."}, {"text": "Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. 
To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data."}, {"text": "Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data."}, {"text": "Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data."}, {"text": "Traditionally, B was considered to be a confounder, because it is associated with X and with Y but is not on a causal path nor is it a descendant of anything on a causal path. Controlling for B causes it to become a confounder. 
This is known as M-bias."}, {"text": "However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation)."}]}, {"question": "What is a cost function in machine learning", "positive_ctxs": [{"text": "Cost Function It is a function that measures the performance of a Machine Learning model for given data. Cost Function quantifies the error between predicted values and expected values and presents it in the form of a single real number. Depending on the problem Cost Function can be formed in many different ways."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. 
There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}, {"text": "It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function."}]}, {"question": "What is the difference between content based filtering and collaborative filtering", "positive_ctxs": [{"text": "Content-based filtering, makes recommendations based on user preferences for product features. Collaborative filtering mimics user-to-user recommendations. It predicts users preferences as a linear, weighted combination of other user preferences. 
Both methods have limitations."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The collaborative filtering bandits (i.e., COFIBA) was introduced by Li and Karatzoglou and Gentile (SIGIR 2016), where the classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, they investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings."}, {"text": "The collaborative filtering bandits (i.e., COFIBA) was introduced by Li and Karatzoglou and Gentile (SIGIR 2016), where the classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data. These approaches are far from ideal in highly dynamic recommendation domains such as news recommendation and computational advertisement, where the set of items and users is very fluid. In this work, they investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings."}, {"text": "Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. 
Thus, even the most popular items have very few ratings.One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends."}, {"text": "Unlike the traditional model of mainstream media, in which there are few editors who set guidelines, collaboratively filtered social media can have a very large number of editors, and content improves as the number of participants increases. Services like Reddit, YouTube, and Last.fm are typical examples of collaborative filtering based media.One scenario of collaborative filtering application is to recommend interesting or popular information as judged by the community. As a typical example, stories appear in the front page of Reddit as they are \"voted up\" (rated positively) by the community."}, {"text": "Collaborative filtering (CF) is a technique used by recommender systems. Collaborative filtering has two senses, a narrow one and a more general one.In the newer, narrower sense, collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue than that of a randomly chosen person."}, {"text": "In the more general sense, collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc. Applications of collaborative filtering typically involve very large data sets. 
Collaborative filtering methods have been applied to many different kinds of data including: sensing and monitoring data, such as in mineral exploration, environmental sensing over large areas or multiple sensors; financial data, such as financial service institutions that integrate many financial sources; or in electronic commerce and web applications where the focus is on user data, etc."}, {"text": "By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm, while that of model-based approaches is the Kernel-Mapping Recommender.A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an \"understanding\" of the item itself."}]}, {"question": "Is there a good library for concept drift detection algorithms", "positive_ctxs": [{"text": "Yes, there are. One example is the WEKA MOA framework [1]. This framework implements standard algorithms in the literature of concept drift detection. The nice thing about this framework is that it allows users to generate new data streams which contains concept drifts of different types."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In one-class classification, the flow of data is not important. Unseen data is classified as typical or outlier depending on its characteristics, whether it is from the initial concept or not. However, unsupervised drift detection monitors the flow of data, and signals a drift if there is a significant amount of change or anomalies."}, {"text": "Unsupervised concept drift detection can be identified as the continuous form of one-class classification. 
One-class classifiers are used for detecting concept drifts."}, {"text": "MOA (Massive Online Analysis): free open-source software specific for mining data streams with concept drift. It has several machine learning algorithms (classification, regression, clustering, outlier detection and recommender systems). Also, it contains a prequential evaluation method, the EDDM concept drift methods, a reader of ARFF real datasets, and artificial stream generators as SEA concepts, STAGGER, rotating hyperplane, random tree, and random radius based functions."}, {"text": "ADWIN Bagging-based methods: Online Bagging methods for MLSC are sometimes combined with explicit concept drift detection mechanisms such as ADWIN (Adaptive Window). ADWIN keeps a variable-sized window to detect changes in the distribution of the data, and improves the ensemble by resetting the components that perform poorly when there is a drift in the incoming data. Generally, the letter 'a' is used as a subscript in the name of such ensembles to indicate the usage of ADWIN change detector."}, {"text": "ADWIN Bagging-based methods: Online Bagging methods for MLSC are sometimes combined with explicit concept drift detection mechanisms such as ADWIN (Adaptive Window). ADWIN keeps a variable-sized window to detect changes in the distribution of the data, and improves the ensemble by resetting the components that perform poorly when there is a drift in the incoming data. Generally, the letter 'a' is used as a subscript in the name of such ensembles to indicate the usage of ADWIN change detector."}, {"text": "Look at the candidates to drop and the components to be dropped. Is there anything that needs to be retained because it is critical to one's construct ? For example, if a conceptually important item only cross loads on a component to be dropped, it is good to keep it for the next round."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? 
With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "Why is random forest called random", "positive_ctxs": [{"text": "The random forest is a model made up of many decision trees. Rather than just simply averaging the prediction of trees (which we could call a \u201cforest\u201d), this model uses two key concepts that gives it the name random: Random sampling of training data points when building trees."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the \u201cobserved\u201d data from suitably generated synthetic data."}, {"text": "The new sample is tested in the random forest created by each bootstrapped dataset and each tree produces a classifier value for the new sample. For Classification, a process called voting is used to determine the final result, where the result produced the most frequently by the random forest is the given result for the sample. For Regression, the sample is assigned the average classifier value produced by the trees."}, {"text": "The new sample is tested in the random forest created by each bootstrapped dataset and each tree produces a classifier value for the new sample. For Classification, a process called voting is used to determine the final result, where the result produced the most frequently by the random forest is the given result for the sample. For Regression, the sample is assigned the average classifier value produced by the trees."}, {"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. 
This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package \"randomForestSRC\"."}, {"text": "The idea of random subspace selection from Ho was also influential in the design of random forests. In this method a forest of trees is grown,"}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}]}, {"question": "What is logistic regression simple explanation", "positive_ctxs": [{"text": "Logistic Regression, also known as Logit Regression or Logit Model, is a mathematical model used in statistics to estimate (guess) the probability of an event occurring having been given some previous data. Logistic Regression works with binary data, where either the event happens (1) or the event does not happen (0)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is a sigmoid function, which takes any real input"}, {"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is a sigmoid function, which takes any real input"}, {"text": "An explanation of logistic regression can begin with an explanation of the standard logistic function. 
The logistic function is a sigmoid function, which takes any real input"}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}]}, {"question": "What does it mean when it says increase or decrease in entropy", "positive_ctxs": [{"text": "Explanation: Entropy (S) by the modern definition is the amount of energy dispersal in a system. Therefore, the system entropy will increase when the amount of motion within the system increases. 
For example, the entropy increases when ice (solid) melts to give water (liquid)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R\u00b2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases."}, {"text": "Compensatory fuzzy logic (CFL) is a branch of fuzzy logic with modified rules for conjunction and disjunction. 
When the truth value of one component of a conjunction or disjunction is increased or decreased, the other component is decreased or increased to compensate. This increase or decrease in truth value may be offset by the increase or decrease in another component."}, {"text": "The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body."}, {"text": "The second law of thermodynamics states that a closed system has entropy that may increase or otherwise remain constant. Chemical reactions cause changes in entropy and entropy plays an important role in determining in which direction a chemical reaction spontaneously proceeds."}, {"text": "Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). 
Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient."}]}, {"question": "What is the main difference between the binomial distribution and the Poisson distribution", "positive_ctxs": [{"text": "The main difference between Binomial and Poisson Distribution is that the Binomial distribution is only for a certain frame or a probability of success and the Poisson distribution is used for events that could occur a very large number of times."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed \u2014 see law of rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. 
There is a rule of thumb stating that the Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n \u2265 100 and np \u2264 10."}, {"text": "The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed or at least p tends to zero. Therefore, the Poisson distribution with parameter \u03bb = np can be used as an approximation to B(n, p) of the binomial distribution if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n \u2265 20 and p \u2264 0.05, or if n \u2265 100 and np \u2264 10.Concerning the accuracy of Poisson approximation, see Novak, ch."}, {"text": "The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed or at least p tends to zero. Therefore, the Poisson distribution with parameter \u03bb = np can be used as an approximation to B(n, p) of the binomial distribution if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n \u2265 20 and p \u2264 0.05, or if n \u2265 100 and np \u2264 10.Concerning the accuracy of Poisson approximation, see Novak, ch."}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. 
Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}]}, {"question": "What is the difference between quartile exc and quartile inc", "positive_ctxs": [{"text": "EXC functions both find a requested quartile of a supplied data set. The difference between these two functions is that QUARTILE. INC bases its calculation on a percentile range of 0 to 1 inclusive, whereas QUARTILE. EXC bases its calculation on a percentile range of 0 to 1 exclusive."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows."}, {"text": "The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows."}, {"text": "The third quartile value is the number that marks three quarters of the ordered set. In other words, there are exactly 75% of the elements that are less than the first quartile and 25% of the elements that are greater. The third quartile value can be easily determined by finding the \"middle\" number between the median and the maximum."}, {"text": "The first quartile value is the number that marks one quarter of the ordered set. In other words, there are exactly 25% of the elements that are less than the first quartile and exactly 75% of the elements that are greater. 
The first quartile value can easily be determined by finding the \"middle\" number between the minimum and the median."}, {"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.This rule is employed by the TI-83 calculator boxplot and \"1-Var Stats\" functions."}, {"text": "The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of the upper half of the data.This rule is employed by the TI-83 calculator boxplot and \"1-Var Stats\" functions."}, {"text": "So the first, second and third 4-quantiles (the \"quartiles\") of the dataset {3, 6, 7, 8, 8, 10, 13, 15, 16, 20} are {7, 9, 15}. If also required, the zeroth quartile is 3 and the fourth quartile is 20."}]}, {"question": "What are multi agents in artificial intelligence", "positive_ctxs": [{"text": "A multi-agent system (MAS or \"self-organized system\") is a computerized system composed of multiple interacting intelligent agents. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As intelligent agents become more popular, there are increasing legal risks involved.Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations."}, {"text": "Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. 
In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots."}, {"text": "Excalibur was a research project led by Alexander Nareyek featuring any-time planning agents for computer games. The architecture is based on structural constraint satisfaction, which is an advanced artificial intelligence technique."}, {"text": "Action selection is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, \"the action selection problem\" is typically associated with intelligent agents and animats\u2014artificial systems that exhibit complex behaviour in an agent environment. The term is also sometimes used in ethology or animal behavior."}, {"text": "Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems.SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment."}, {"text": "The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:"}, {"text": "The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. 
Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:"}]}, {"question": "What is data augmentation in deep learning", "positive_ctxs": [{"text": "Data Augmentation encompasses a suite of techniques that enhance the size and quality of training datasets such that better Deep Learning models can be built using them."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Data augmentation in data analysis are techniques used to increase the amount of data by adding slightly modified copies of already existing data or newly created synthetic data from existing data. It acts as a regularizer and helps reduce overfitting when training a machine learning model. It is closely related to oversampling in data analysis."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. 
Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}, {"text": "Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks."}]}, {"question": "Why do we use log loss in logistic regression", "positive_ctxs": [{"text": "Log loss is used when we have {0,1} response. This is usually because when we have {0,1} response, the best models give us values in terms of probabilities. In simple words, log loss measures the UNCERTAINTY of the probabilities of your model by comparing them to the true labels."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (In this case, the binary label is often denoted by {-1,+1}).Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for Linear regression."}, {"text": "Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. 
For example, suppose we have"}, {"text": "Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. For example, suppose we have"}, {"text": "The softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression."}, {"text": "The softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression."}, {"text": "The logistic loss is convex and grows linearly for negative values which make it less sensitive to outliers. The logistic loss is used in the LogitBoost algorithm."}]}, {"question": "How do you work out Standardised scores", "positive_ctxs": [{"text": "As the formula shows, the standard score is simply the score, minus the mean score, divided by the standard deviation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "And you just have to have somebody close to the power cord. 
Right when you see it about to happen, you gotta yank that electricity out of the wall, man."}, {"text": "Again, not every set of Likert scaled items can be used for Rasch measurement. The data has to be thoroughly checked to fulfill the strict formal axioms of the model. However, the raw scores are the sufficient statistics for the Rasch measures, a deliberate choice by Georg Rasch, so, if you are prepared to accept the raw scores as valid, then you can also accept the Rasch measures as valid."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "If you make 6 wagers of 1, and win once and lose 5 times, you will be paid 6 and finish square. Wagering 1 at 1:1 (Evens) pays out 2 (1 + 1) and wagering 1 at 1:2 pays out 3 (1 + 2). These example may be displayed in many different forms:"}]}, {"question": "What is a tensor ML", "positive_ctxs": [{"text": "A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array. It is a term and set of techniques known in machine learning in the training and operation of deep learning models can be described in terms of tensors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). In EMP, a tensor is projected to a point through N unit projection vectors."}, {"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Here w is called the weight. 
In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in"}, {"text": "A tensor is a multilinear transformation that maps a set of vector spaces to another vector space. A data tensor is a collection of multivariate observations organized into a M-way array."}, {"text": "This projection is an extension of the higher-order singular value decomposition (HOSVD) to subspace learning. Hence, its origin is traced back to the Tucker decomposition in 1960s.A TVP is a direct projection of a high-dimensional tensor to a low-dimensional vector, which is also referred to as the rank-one projections. As TVP projects a tensor to a vector, it can be viewed as multiple projections from a tensor to a scalar."}, {"text": "The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example, consider the following real tensor"}]}, {"question": "What do negative coefficients mean in regression", "positive_ctxs": [{"text": "A negative coefficient suggests that as the independent variable increases, the dependent variable tends to decrease. 
The coefficient value signifies how much the mean of the dependent variable changes given a one-unit shift in the independent variable while holding other variables in the model constant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set of hyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter."}]}, {"question": "What is multi class classification in machine learning", "positive_ctxs": [{"text": "In machine learning, multiclass or multinomial classification is the problem of classifying instances into one of three or more classes (classifying instances into one of two classes is called binary classification)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Content-based classification is classification in which the weight given to particular subjects in a document determines the class to which the document is assigned. It is, for example, a common rule for classification in libraries, that at least 20% of the content of a book should be about the class to which the book is assigned. In automatic classification it could be the number of times given words appears in a document."}, {"text": "Decision boundaries are not always clear cut. That is, the transition from one class in the feature space to another is not discontinuous, but gradual. This effect is common in fuzzy logic based classification algorithms, where membership in one class or another is ambiguous."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). 
It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. 
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}]}, {"question": "What does bias mean", "positive_ctxs": [{"text": "Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. 
What is special about financial derivatives?"}, {"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. 
Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}]}, {"question": "How do you make a decision in tree machine learning", "positive_ctxs": [{"text": "Steps for Making decision treeGet list of rows (dataset) which are taken into consideration for making decision tree (recursively at each nodes).Calculate uncertanity of our dataset or Gini impurity or how much our data is mixed up etc.Generate list of all question which needs to be asked at that node.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An alternating decision tree (ADTree) is a machine learning method for classification. It generalizes decision trees and has connections to boosting."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity.In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "A decision stump is a machine learning model consisting of a one-level decision tree. 
That is, it is a decision tree with one internal node (the root) which is immediately connected to the terminal nodes (its leaves). A decision stump makes a prediction based on the value of just a single input feature."}, {"text": "Automation of feature engineering is a research topic that dates back to at least the late 1990s and machine learning software that incorporates automated feature engineering has been commercially available since 2016. The academic literature on the topic can be roughly separated into two strings: First, Multi-relational decision tree learning (MRDTL), which uses a supervised algorithm that is similar to a decision tree. Second, more recent approaches, like Deep Feature Synthesis, which use simpler methods.Multi-relational decision tree learning (MRDTL) generates features in the form of SQL queries by successively adding new clauses to the queries."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}]}, {"question": "How are the parameters updates during the gradient descent process", "positive_ctxs": [{"text": "On each iteration, we update the parameters in the opposite direction of the gradient of the objective function J(w) w.r.t the parameters where the gradient gives the direction of the steepest ascent. 
The size of the step we take on each iteration to reach the local minimum is determined by the learning rate \u03b1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "AdaGrad (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative."}]}, {"question": "Does increasing the number of feature variables of the dataset improve the accuracy of the training model", "positive_ctxs": [{"text": "In general, there is no universal rule of thumb indicating that the accuracy of a learner is directly proportional to the number of features used to train it."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The use of different model parameters and different corpus sizes can greatly affect the quality of a word2vec model. Accuracy can be improved in a number of ways, including the choice of model architecture (CBOW or Skip-Gram), increasing the training data set, increasing the number of vector dimensions, and increasing the window size of words considered by the algorithm. 
Each of these improvements comes with the cost of increased computational complexity and therefore increased model generation time. In models using large corpora and a high number of dimensions, the skip-gram model yields the highest overall accuracy, and consistently produces the highest accuracy on semantic relationships, as well as yielding the highest syntactic accuracy in most cases."}, {"text": "Successively, the fitted model is used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset provides an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g. 
the number of hidden units (layers and layer widths) in a neural network)."}]}, {"question": "Where is the hidden Markov model used", "positive_ctxs": [{"text": "Hidden Markov models are known for their applications to thermodynamics, statistical mechanics, physics, chemistry, economics, finance, signal processing, information theory, pattern recognition - such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges and"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Markov sources are commonly used in communication theory, as a model of a transmitter. Markov sources also occur in natural language processing, where they are used to represent hidden meaning in a text. Given the output of a Markov source, whose underlying Markov chain is unknown, the task of solving for the underlying chain is undertaken by the techniques of hidden Markov models, such as the Viterbi algorithm."}, {"text": "A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist."}, {"text": "A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables. The resulting model is termed a hidden Markov model and is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the resulting article for more information."}, {"text": "A hidden semi-Markov model (HSMM) is a statistical model with the same structure as a hidden Markov model except that the unobservable process is semi-Markov rather than Markov. This means that the probability of there being a change in the hidden state depends on the amount of time that has elapsed since entry into the current state. This is in contrast to hidden Markov models where there is a constant probability of changing state given survival in the state up to that time. For instance Sanson & Thomson (2001) modelled daily rainfall using a hidden semi-Markov model."}, {"text": "In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time"}]}, {"question": "What is the difference between face detection and face recognition", "positive_ctxs": [{"text": "Face detection is a broader term than face recognition. 
Face detection just means that a system is able to identify that there is a human face present in an image or video. Face recognition can confirm identity. It is therefore used to control access to sensitive areas."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Until the 1990s facial recognition systems were developed primarily by using photographic portraits of human faces. Research on face recognition to reliably locate a face in an image that contains other objects gained traction in the early 1990s with the principal component analysis (PCA). The PCA method of face detection is also known as Eigenface and was developed by Matthew Turk and Alex Pentland."}, {"text": "The software was \"robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses\u2014even sunglasses\". Real-time face detection in video footage became possible in 2001 with the Viola\u2013Jones object detection framework for faces. Paul Viola and Michael Jones combined their face detection method with the Haar-like feature approach to object recognition in digital images to launch AdaBoost, the first real-time frontal-view face detector."}, {"text": "First face detection is used to segment the face from the image background. In the second step the segmented face image is aligned to account for face pose, image size and photographic properties, such as illumination and grayscale. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, the facial feature extraction."}, {"text": "In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. 
The results indicated that the new algorithms are 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995."}, {"text": "One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition."}, {"text": "Quality measures are very important in facial recognition systems as large degrees of variations are possible in face images. Factors such as illumination, expression, pose and noise during face capture can affect the performance of facial recognition systems. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates, thus questions have been raised on the effectiveness of face recognition software in cases of railway and airport security."}, {"text": "Christoph von der Malsburg and his research team at the University of Bochum developed Elastic Bunch Graph Matching in the mid 1990s to extract a face out of an image using skin segmentation. By 1997 the face detection method developed by Malsburg outperformed most other facial detection systems on the market. The so-called \"Bochum system\" of face detection was sold commercially on the market as ZN-Face to operators of airports and other busy locations."}]}, {"question": "How small of an alpha value can you choose and still have sufficient evidence to reject the null hypothesis", "positive_ctxs": [{"text": "Significance level and p-value \u03b1 is the maximum probability of rejecting the null hypothesis when the null hypothesis is true. If \u03b1 = 1 we always reject the null, if \u03b1 = 0 we never reject the null hypothesis. 
If we choose to compare the p-value to \u03b1 = 0.01, we are insisting on stronger evidence!"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "critical region), then we say the null hypothesis is rejected at the chosen level of significance. Rejection of the null hypothesis is a conclusion. This is like a \"guilty\" verdict in a criminal trial: the evidence is sufficient to reject innocence, thus proving guilt."}]}, {"question": "What is the appropriate test statistic", "positive_ctxs": [{"text": "You can use test statistics to determine whether to reject the null hypothesis. The test statistic compares your data with what is expected under the null hypothesis. The test statistic is used to calculate the p-value. A test statistic measures the degree of agreement between a sample of data and the null hypothesis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "If the variation of the test statistic is strongly non-normal, a Z-test should not be used. If estimates of nuisance parameters are plugged in as discussed above, it is important to use estimates appropriate for the way the data were sampled. In the special case of Z-tests for the one or two sample location problem, the usual sample standard deviation is only appropriate if the data were collected as an independent sample."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "An important property of a test statistic is that its sampling distribution under the null hypothesis must be calculable, either exactly or approximately, which allows p-values to be calculated. A test statistic shares some of the same qualities of a descriptive statistic, and many statistics can be used as both test statistics and descriptive statistics. However, a test statistic is specifically intended for use in statistical testing, whereas the main quality of a descriptive statistic is that it is easily interpretable."}, {"text": "T is easier to calculate by hand than W and the test is equivalent to the two-sided test described above; however, the distribution of the statistic under"}, {"text": "The logrank statistic can be derived as the score test for the Cox proportional hazards model comparing two groups. It is therefore asymptotically equivalent to the likelihood ratio test statistic based on that model."}]}, {"question": "What is the operator norm of a matrix", "positive_ctxs": [{"text": "In mathematics, the operator norm is a means to measure the \"size\" of certain linear operators. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sum of the k largest singular values of M is a matrix norm, the Ky Fan k-norm of M. The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of M as a linear operator with respect to the Euclidean norms of Km and Kn. In other words, the Ky Fan 1-norm is the operator norm induced by the standard \u21132 Euclidean inner product. 
For this reason, it is also called the operator 2-norm."}, {"text": "Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel\u2013Feldman operator is either the corresponding gradient vector or the norm of this vector. The Sobel\u2013Feldman operator is based on convolving the image with a small, separable, and integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computations."}, {"text": "It is possible to use the SVD of a square matrix A to determine the orthogonal matrix O closest to A. The closeness of fit is measured by the Frobenius norm of O \u2212 A. The solution is the product UV*."}, {"text": "Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. 
The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations."}]}, {"question": "How do I start learning artificial intelligence", "positive_ctxs": [{"text": "How to Get Started with AI: Pick a topic you are interested in. Find a quick solution. Improve your simple solution. Share your solution. Repeat steps 1-4 for different problems. Complete a Kaggle competition. Use machine learning professionally."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone. However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:"}, {"text": "Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it \"the greatest risk we face as a civilization\". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well?"}, {"text": "In a 2014 article in The Atlantic, James Hamblin noted that most people do not care one way or the other about artificial general intelligence, and characterized his own gut reaction to the topic as: \"Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?\""}, {"text": "Artificial intelligence and computer vision share other topics such as pattern recognition and learning techniques. Consequently, computer vision is sometimes seen as a part of the artificial intelligence field or the computer science field in general."}]}, {"question": "What is the name for Facebook's ranking algorithm", "positive_ctxs": [{"text": "EdgeRank"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If we can find a mapping from data to real numbers, ranking the data can be solved by ranking the real numbers. This mapping is called a utility function. For label ranking the mapping is a function"}, {"text": "In competition ranking, items that compare equal receive the same ranking number, and then a gap is left in the ranking numbers. The number of ranking numbers that are left out in this gap is one less than the number of items that compared equal. 
Equivalently, each item's ranking number is 1 plus the number of items ranked above it."}, {"text": "In dense ranking, items that compare equally receive the same ranking number, and the next item(s) receive the immediately following ranking number. Equivalently, each item's ranking number is 1 plus the number of items ranked above it that are distinct with respect to the ranking order."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "What is the difference between probability and likelihood", "positive_ctxs": [{"text": "The distinction between probability and likelihood is fundamentally important: Probability attaches to possible results; likelihood attaches to hypotheses. Explaining this distinction is the purpose of this first column. 
Possible results are mutually exclusive and exhaustive."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\u039bp is the absolute difference between pre- and posttest probability of conditions (such as diseases) that the test is expected to achieve. A major factor for such an absolute difference is the power of the test itself, such as can be described in terms of, for example, sensitivity and specificity or likelihood ratio. Another factor is the pre-test probability, with a lower pre-test probability resulting in a lower absolute difference, with the consequence that even very powerful tests achieve a low absolute difference for very unlikely conditions in an individual (such as rare diseases in the absence of other indications), while high test power can make a great difference for highly suspected conditions."}, {"text": "Given a model, likelihood intervals can be compared to confidence intervals. If \u03b8 is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for \u03b8 will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e\u22122 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1)."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}]}, {"question": "Are based on the idea that subjects are randomly assigned to groups", "positive_ctxs": [{"text": "Random assignment is however a process of randomly assigning subjects to experimental or control groups. 
This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group, prior to treatment administration."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Randomized, controlled, crossover experiments are especially important in health care. In a randomized clinical trial, the subjects are randomly assigned treatments. When such a trial is a repeated measures design, the subjects are randomly assigned to a sequence of treatments."}, {"text": "Well-designed experimental studies replace equality of individuals as in the previous example by equality of groups. The objective is to construct two groups that are similar except for the treatment that the groups receive. This is achieved by selecting subjects from a single population and randomly assigning them to two or more groups."}, {"text": "Well-designed experimental studies replace equality of individuals as in the previous example by equality of groups. The objective is to construct two groups that are similar except for the treatment that the groups receive. This is achieved by selecting subjects from a single population and randomly assigning them to two or more groups."}, {"text": "Well-designed experimental studies replace equality of individuals as in the previous example by equality of groups. The objective is to construct two groups that are similar except for the treatment that the groups receive. This is achieved by selecting subjects from a single population and randomly assigning them to two or more groups."}, {"text": "Randomized, controlled crossover experiments are especially important in health care. In a randomized clinical trial, the subjects are randomly assigned to different arms of the study which receive different treatments. 
When the trial has a repeated measures design, the same measures are collected multiple times for each subject."}, {"text": "False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields. A good way to prevent biases potentially leading to false positives in the data collection phase is to use a double-blind design. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group."}, {"text": "Commonly used initialization methods are Forgy and Random Partition. The Forgy method randomly chooses k observations from the dataset and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points."}]}, {"question": "What is a normal sample distribution", "positive_ctxs": [{"text": "A sampling distribution is a probability distribution of a statistic obtained from a larger number of samples drawn from a specific population. The sampling distribution of a given population is the distribution of frequencies of a range of different outcomes that could possibly occur for a statistic of a population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-square distribution is the square of a standard normal distribution."}, {"text": "The subscript 1 indicates that this particular chi-square distribution is constructed from only 1 standard normal distribution. 
A chi-square distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution."}, {"text": "An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small."}, {"text": "An informal approach to testing normality is to compare a histogram of the sample data to a normal probability curve. The empirical distribution of the data (the histogram) should be bell-shaped and resemble the normal distribution. This might be difficult to see if the sample is small."}, {"text": "(n is the sample size) since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution to that of the mean and is generally not normal (but it may be close for large sample sizes)."}, {"text": "The primary reason for which the chi-square distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t-statistic in a t-test. For these hypothesis tests, as the sample size, n, increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem)."}, {"text": ", denote a random sample from a distribution having the pdf f(x, \u03b8) for \u03b9 < \u03b8 < \u03b4. Let Y1 = u1(X1, X2, ..., Xn) be a statistic whose pdf is g1(y1; \u03b8). 
What we want to prove is that Y1 = u1(X1, X2, ..., Xn) is a sufficient statistic for \u03b8 if and only if, for some function H,"}]}, {"question": "How do I create a labeled dataset", "positive_ctxs": [{"text": "Well labeled dataset can be used to train a custom model.In the Data Labeling Service UI, you create a dataset and import items into it from the same page.Open the Data Labeling Service UI. Click the Create button in the title bar.On the Add a dataset page, enter a name and description for the dataset.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "ASSIGNMENT, IF/ELSE, FOR/WHILE or recursive programs, that are needed to make a language Turing Complete. It should be labeled as such: a way to create a single logical operator, not a way to create programs in general. Perhaps \u201cOperator Synthesis\u201d could be used."}, {"text": "The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of the rejection of a null hypothesis that is true)."}, {"text": "The original dataset contains several entries of samples from s1 to s5. Each sample has 5 features (Gene 1 to Gene 5). All samples are labeled as Yes or No for a classification problem."}, {"text": "The original dataset contains several entries of samples from s1 to s5. Each sample has 5 features (Gene 1 to Gene 5). All samples are labeled as Yes or No for a classification problem."}, {"text": "Some academics and legal technology startups are attempting to create algorithmic models to predict case outcomes. 
Part of this overall effort involves improved case assessment for litigation funding.In order to better evaluate the quality of case outcome prediction systems, a proposal has been made to create a standardised dataset that would allow comparisons between systems."}, {"text": "I answer that with a resounding, yes. As part of my evidence, I consider testimony from journalists themselves. ... [A] solid majority of journalists do allow their political ideology to influence their reporting."}]}, {"question": "What is logic in artificial intelligence", "positive_ctxs": [{"text": "Logic, as per the definition of the Oxford dictionary, is \"the reasoning conducted or assessed according to strict principles and validity\". In Artificial Intelligence also, it carries somewhat the same meaning. Logic can be defined as the proof or validation behind any reason provided."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Today, logic is extensively applied in the field of artificial intelligence, and this field provide a rich source of problems in formal and informal logic. Argumentation theory is one good example of how logic is being applied to artificial intelligence. The ACM Computing Classification System in particular regards:"}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "Today, some academics claim that Aristotle's system is generally seen as having little more than historical value (though there is some current interest in extending term logics), regarded as made obsolete by the advent of propositional logic and the predicate calculus. 
Others use Aristotle in argumentation theory to help develop and critically question argumentation schemes that are used in artificial intelligence and legal arguments."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. 
If AI surpasses humanity in general intelligence and becomes \"superintelligent\", then it could become difficult or impossible for humans to control."}]}, {"question": "What is the difference between standard deviation and quartile deviation", "positive_ctxs": [{"text": "Quartile deviation is the difference between \u201cfirst and third quartiles\u201d in any distribution. Standard deviation measures the \u201cdispersion of the data set\u201d that is relative to its mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The IQR, mean, and standard deviation of a population P can be used in a simple test of whether or not P is normally distributed, or Gaussian. If P is normally distributed, then the standard score of the first quartile, z1, is \u22120.67, and the standard score of the third quartile, z3, is +0.67. Given mean = X and standard deviation = \u03c3 for P, if P is normally distributed, the first quartile"}, {"text": "The IQR, mean, and standard deviation of a population P can be used in a simple test of whether or not P is normally distributed, or Gaussian. If P is normally distributed, then the standard score of the first quartile, z1, is \u22120.67, and the standard score of the third quartile, z3, is +0.67. Given mean = X and standard deviation = \u03c3 for P, if P is normally distributed, the first quartile"}, {"text": "In inter-laboratory experiments, a concentration or other quantity of a chemical substance is measured repeatedly in different laboratories to assess the variability of the measurements. Then, the standard deviation of the difference between two values obtained within the same laboratory is called repeatability. The standard deviation for the difference between two measurement from different laboratories is called reproducibility."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. 
Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "The 2-norm and \u221e-norm are strictly convex, and thus (by convex optimization) the minimizer is unique (if it exists), and exists for bounded distributions. Thus standard deviation about the mean is lower than standard deviation about any other point, and the maximum deviation about the midrange is lower than the maximum deviation about any other point."}, {"text": "-th feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. The score is normalized by the standard deviation of these differences."}]}, {"question": "How do you find a variance of a function", "positive_ctxs": [{"text": "Variance: Var(X) To calculate the Variance: square each value and multiply by its probability. sum them up and we get \u03a3x2p. then subtract the square of the Expected Value \u03bc"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. 
Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "It's trivial to have a small variance \u2212 an \"estimator\" that is constant has a variance of zero. But from the above equation we find that the mean squared error of a biased estimator is bounded by"}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "What is deep Boltzmann machine", "positive_ctxs": [{"text": "A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables. It is a network of symmetrically coupled stochastic binary units. 
It comprises a set of visible units and layers of hidden units ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multimodal deep Boltzmann machines is successfully used in classification and missing data retrieval. The classification accuracy of multimodal deep Boltzmann machine outperforms support vector machines, latent Dirichlet allocation and deep belief network, when models are tested on data with both image-text modalities or with single modality. Multimodal deep Boltzmann machine is also able to predict the missing modality given the observed ones with reasonably good precision."}, {"text": "A deep Boltzmann machine has a sequence of layers of hidden units.There are only connections between adjacent hidden layers, as well as between visible units and hidden units in the first hidden layer. The energy function of the system adds layer interaction terms to the energy function of general restricted Boltzmann machine and is defined by"}, {"text": "The multimodal learning model is also capable to fill missing modality given the observed ones. The multimodal learning model combines two deep Boltzmann machines each corresponds to one modality. An additional hidden layer is placed on top of the two Boltzmann Machines to give the joint representation."}, {"text": "A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A more efficient architecture is called restricted Boltzmann machine where connection is only allowed between hidden unit and visible unit, which is described in the next section."}, {"text": "Geoffrey Hinton developed a technique for training many-layered deep autoencoders. His method involves treating each neighbouring set of two layers as a restricted Boltzmann machine so that pretraining approximates a good solution, then using backpropagation to fine-tune the results. This model takes the name of deep belief network."}]}, {"question": "What is computational intelligence and how is it related to AI", "positive_ctxs": [{"text": "According to Bezdek (1994), Computational Intelligence is a subset of Artificial Intelligence. There are two types of machine intelligence: the artificial one based on hard computing techniques and the computational one based on soft computing methods, which enable adaptation to many situations."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. 
While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union. Regulation is considered necessary to both encourage AI and manage associated risks."}, {"text": "An artificial intelligence system can (only) act like it thinks and has a mind.The first one is called \"the strong AI hypothesis\" and the second is \"the weak AI hypothesis\" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the \"strong AI hypothesis\" as \"strong AI\". 
This usage is also common in academic AI research and textbooks.The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible."}, {"text": "An artificial intelligence system can (only) act like it thinks and has a mind.The first one is called \"the strong AI hypothesis\" and the second is \"the weak AI hypothesis\" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the \"strong AI hypothesis\" as \"strong AI\". This usage is also common in academic AI research and textbooks.The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible."}]}, {"question": "How is predictive analytics done", "positive_ctxs": [{"text": "Definition. Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis."}, {"text": "k-nearest neighbor search identifies the top k nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors."}, {"text": "Differentiating the fields of educational data mining (EDM) and learning analytics (LA) has been a concern of several researchers. 
George Siemens takes the position that educational data mining encompasses both learning analytics and academic analytics, the former of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. Baepler and Murdoch define academic analytics as an area that \"...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior\"."}, {"text": "In predictive analytics and machine learning, the concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes."}, {"text": ").Chatti, Muslim and Schroeder note that the aim of open learning analytics (OLA) is to improve learning effectiveness in lifelong learning environments. The authors refer to OLA as an ongoing analytics process that encompasses diversity at all four dimensions of the learning analytics reference model."}, {"text": "Much of the software that is currently used for learning analytics duplicates functionality of web analytics software, but applies it to learner interactions with content. Social network analysis tools are commonly used to map social connections and discussions. Some examples of learning analytics software tools include:"}, {"text": "Big data analytics has helped healthcare improve by providing personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms and patient registries and fragmented point solutions. Some areas of improvement are more aspirational than actually implemented. 
The level of data generated within healthcare systems is not trivial."}]}, {"question": "Is exponential distribution discrete or continuous", "positive_ctxs": [{"text": "It is very much like the exponential distribution, with \u03bb corresponding to 1/p, except that the geometric distribution is discrete while the exponential distribution is continuous."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. 
This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part."}, {"text": "Because the distribution of a continuous latent variable can be approximated by a discrete distribution, the distinction between continuous and discrete variables turns out not to be fundamental at all. Therefore, there may be a psychometrical latent variable, but not a psychological psychometric variable."}, {"text": "Because the distribution of a continuous latent variable can be approximated by a discrete distribution, the distinction between continuous and discrete variables turns out not to be fundamental at all. Therefore, there may be a psychometrical latent variable, but not a psychological psychometric variable."}]}, {"question": "What is distance measure in clustering", "positive_ctxs": [{"text": "For most common clustering software, the default distance measure is the Euclidean distance. Correlation-based distance considers two objects to be similar if their features are highly correlated, even though the observed values may be far apart in terms of Euclidean distance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. 
In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances."}, {"text": "times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter."}]}, {"question": "Is a B testing ethical", "positive_ctxs": [{"text": "A/B tests are easy and seem harmless, but many consumers become disturbed when they find out they're being tested without knowing it. Some argue that A/B testing tracks along the same ethical lines as a product launch; others believe organizations\u200b must be transparent about their testing even if it seems harmless."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A formal philosophy of ethical calculus is a development in the study of ethics, combining elements of natural selection, self-organizing systems, emergence, and algorithm theory. 
According to ethical calculus, the most ethical course of action in a situation is an absolute, but rather than being based on a static ethical code, the ethical code itself is a function of circumstances. The optimal ethic is the best possible course of action taken by an individual with the given limitations."}, {"text": "3, which has a goat. He then says to you, \"Do you want to pick door No. Is it to your advantage to switch your choice?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Both C3 and a B cell's antibodies can bind to a pathogen, and when a B cell has its antibodies bind to a pathogen with C3, it speeds up that B cell's secretion of more antibodies and more C3, thus creating a positive feedback loop."}, {"text": "For example, if a person has dengue, they might have a 90% chance of testing positive for dengue. In this case, what is being measured is that if event B (\"having dengue\") has occurred, the probability of A (test is positive) given that B (having dengue) occurred is 90%: that is, P(A|B) = 90%. Alternatively, if a person tests positive for dengue, they may have only a 15% chance of actually having this rare disease, because the false positive rate for the test may be high."}, {"text": "The field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. 
The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics: \"Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning."}]}, {"question": "Is Anova the same as linear regression", "positive_ctxs": [{"text": "From the mathematical point of view, linear regression and ANOVA are identical: both break down the total variance of the data into different \u201cportions\u201d and verify the equality of these \u201csub-variances\u201d by means of a test (\u201cF\u201d Test)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. 
Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Hence, the outcome is either pi or 1 \u2212 pi, as in the previous line.Linear predictor functionThe basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. 
a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials."}]}, {"question": "How do you do a regression in Excel with multiple variables", "positive_ctxs": [{"text": "1:3610:15Suggested clip \u00b7 117 secondsConducting a Multiple Regression using Microsoft Excel Data YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "What is the general linear model GLM Why does it matter", "positive_ctxs": [{"text": "The General Linear Model (GLM) is a useful framework for comparing how several variables affect different continuous variables. In it's simplest form, GLM is described as: Data = Model + Error (Rutherford, 2001, p.3) GLM is the foundation for several statistical tests, including ANOVA, ANCOVA and regression analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss\u2013Markov theorem, which does not assume that the distribution is normal."}, {"text": "A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. In linear regression, the use of the least-squares estimator is justified by the Gauss\u2013Markov theorem, which does not assume that the distribution is normal."}, {"text": "The linear-nonlinear-Poisson cascade model is a cascade of a linear filtering process followed by a nonlinear spike generation step. In the case that output spikes feed back, via a linear filtering process, we arrive at a model that is known in the neurosciences as Generalized Linear Model (GLM). 
The GLM is mathematically equivalent to the spike response model (SRM) with escape noise; but whereas in the SRM the internal variables are interpreted as the membrane potential and the firing threshold, in the GLM the internal variables are abstract quantities that summarize the net effect of input (and recent output spikes) before spikes are generated in the final step. External link:"}, {"text": "In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value."}, {"text": "In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value."}, {"text": "A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. That is, it has the general form"}, {"text": "A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. 
That is, it has the general form"}]}, {"question": "What is the difference between GloVe and word2vec", "positive_ctxs": [{"text": "Word2Vec takes texts as training data for a neural network. The resulting embedding captures whether words appear in similar contexts. GloVe focuses on words co-occurrences over the whole corpus. Its embeddings relate to the probabilities that two words appear together."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. 
Establishing and correcting for bias is necessary for calibration."}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}]}, {"question": "What does principal component analysis do", "positive_ctxs": [{"text": "Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multilinear principal component analysis (MPCA) is a multilinear extension of principal component analysis (PCA). MPCA is employed in the analysis of n-way arrays, i.e. a cube or hyper-cube of numbers, also informally referred to as a \"data tensor\"."}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. 
It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "In statistics, multiple correspondence analysis (MCA) is a data analysis technique for nominal categorical data, used to detect and represent underlying structures in a data set. It does this by representing data as points in a low-dimensional Euclidean space. The procedure thus appears to be the counterpart of principal component analysis for categorical data."}, {"text": "The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces."}, {"text": "The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces."}, {"text": "The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces."}, {"text": "The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces."}]}, {"question": "Which is typical of a positively skewed distribution", "positive_ctxs": [{"text": "In a positively skewed distribution, the mean is usually greater than the median because the few high scores tend to shift the mean to the right. In a positively skewed distribution, the mode is always less than the mean and median."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. 
The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "positive skew: The right tail is longer; the mass of the distribution is concentrated on the left of the figure. The distribution is said to be right-skewed, right-tailed, or skewed to the right, despite the fact that the curve itself appears to be skewed or leaning to the left; right instead refers to the right tail being drawn out and, often, the mean being skewed to the right of a typical center of the data. A right-skewed distribution usually appears as a left-leaning curve."}, {"text": "negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. A left-skewed distribution usually appears as a right-leaning curve."}, {"text": "negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. 
A left-skewed distribution usually appears as a right-leaning curve."}, {"text": "A distribution that is skewed to the right (the tail of the distribution is longer on the right), will have a positive skewness."}, {"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}, {"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}]}, {"question": "What is forecasting in machine learning", "positive_ctxs": [{"text": "For years, people have been forecasting weather patterns, economic and political events, sports outcomes, and more. Because we try to predict so many different events, there are a wide variety of ways in which forecasts can be developed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. 
ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. 
ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}, {"text": "The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields and was soon introduced to psychology to account for perceptual detection of stimuli. ROC analysis since then has been used in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas for many decades and is increasingly used in machine learning and data mining research."}]}, {"question": "Which is an example of non parametric statistic", "positive_ctxs": [{"text": "Non parametric do not assume that the data is normally distributed. For example: the Kruskal Willis test is the non parametric alternative to the One way ANOVA and the Mann Whitney is the non parametric alternative to the two sample t test. The main nonparametric tests are: 1-sample sign test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. 
For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). 
All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "An \"estimator\" or \"point estimate\" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. The parameter being estimated is sometimes called the estimand. It can be either finite-dimensional (in parametric and semi-parametric models), or infinite-dimensional (semi-parametric and non-parametric models)."}, {"text": "It is possible to make statistical inferences without assuming a particular parametric family of probability distributions. In that case, one speaks of non-parametric statistics as opposed to the parametric statistics just described. 
For example, a test based on Spearman's rank correlation coefficient would be called non-parametric since the statistic is computed from the rank-order of the data disregarding their actual values (and thus regardless of the distribution they were sampled from), whereas those based on the Pearson product-moment correlation coefficient are parametric tests since it is computed directly from the data values and thus estimates the parameter known as the population correlation."}]}, {"question": "What is threshold machine learning", "positive_ctxs": [{"text": "The decision for converting a predicted probability or scoring into a class label is governed by a parameter referred to as the \u201cdecision threshold,\u201d \u201cdiscrimination threshold,\u201d or simply the \u201cthreshold.\u201d The default value for the threshold is 0.5 for normalized predicted probabilities or scores in the range between 0"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How do you calculate weighted mean", "positive_ctxs": [{"text": "Summary: Weighted Mean: A mean where some values contribute more than others. When the weights add to 1: just multiply each weight by the matching value and sum it all up. Otherwise, multiply each weight w by its matching value x, sum that all up, and divide by the sum of weights: Weighted Mean = \u03a3wx / \u03a3w."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "When the percentage of points to discard does not yield a whole number, the trimmed mean may be defined by interpolation, generally linear interpolation, between the nearest whole numbers. For example, if you need to calculate the 15% trimmed mean of a sample containing 10 entries, strictly this would mean discarding 1 point from each end (equivalent to the 10% trimmed mean). If interpolating, one would instead compute the 10% trimmed mean (discarding 1 point from each end) and the 20% trimmed mean (discarding 2 points from each end), and then interpolating, in this case averaging these two values."}, {"text": "Aspect is unusual in ASL in that transitive verbs derived for aspect lose their transitivity. That is, while you can sign 'dog chew bone' for the dog chewed on a bone, or 'she look-at me' for she looked at me, you cannot do the same in the durative to mean the dog gnawed on the bone or she stared at me. Instead, you must use other strategies, such as a topic construction (see below) to avoid having an object for the verb."}, {"text": "To calculate decimal odds, you can use the equation Return = Initial Wager x Decimal Value. For example, if you bet \u20ac100 on Liverpool to beat Manchester City at 2.00 odds you would win \u20ac200 (\u20ac100 x 2.00). Decimal odds are favoured by betting exchanges because they are the easiest to work with for trading, as they reflect the inverse of the probability of an outcome."}]}, {"question": "How do you split your data between training and validation", "positive_ctxs": [{"text": "Try a series of runs with different amounts of training data: randomly sample 20% of it, say, 10 times and observe performance on the validation data, then do the same with 40%, 60%, 80%. 
You should see both greater performance with more data and lower variance across the different random samples."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. 
The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems."}]}, {"question": "What is the point estimate of the population standard deviation", "positive_ctxs": [{"text": "The sample standard deviation (s) is a point estimate of the population standard deviation (\u03c3). The sample mean (\u0304x) is a point estimate of the population mean, \u03bc. The sample variance (s2) is a point estimate of the population variance (\u03c32)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": ", can be used as an estimate of the mean parameter (estimand), denoted \u03bc, of the population from which the sample was drawn. Similarly, the sample variance (estimator), denoted S2, can be used to estimate the variance parameter (estimand), denoted \u03c32, of the population from which the sample was drawn. (Note that the sample standard deviation (S) is not an unbiased estimate of the population standard deviation (\u03c3): see Unbiased estimation of standard deviation.)"}, {"text": "One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation \u03c3 is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers)."}, {"text": "One can find the standard deviation of an entire population in cases (such as standardized testing) where every member of a population is sampled. In cases where that cannot be done, the standard deviation \u03c3 is estimated by examining a random sample taken from the population and computing a statistic of the sample, which is used as an estimate of the population standard deviation. 
Such a statistic is called an estimator, and the estimator (or the value of the estimator, namely the estimate) is called a sample standard deviation, and is denoted by s (possibly with modifiers)."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}]}, {"question": "What is Bayesian sampling", "positive_ctxs": [{"text": "In a nutshell, the goal of Bayesian inference is to maintain a full posterior probability distribution over a set of random variables. Sampling algorithms based on Monte Carlo Markov Chain (MCMC) techniques are one possible way to go about inference in such models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be \"unbiased\" in sampling theory terms."}, {"text": "To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be \"unbiased\" in sampling theory terms."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The first application to Markov decision processes was in 2000. A related approach (see Bayesian control rule) was published in 2010. In 2010 it was also shown that Thompson sampling is instantaneously self-correcting."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "What are the steps in designing a machine learning problem", "positive_ctxs": [{"text": "The 7 Steps of Machine Learning: 1 - Data Collection. The quantity & quality of your data dictate how accurate our model is. 2 - Data Preparation. Wrangle data and prepare it for training. 3 - Choose a Model. 4 - Train the Model. 5 - Evaluate the Model. 6 - Parameter Tuning. 7 - Make Predictions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Statistical classification is a problem studied in machine learning. It is a type of supervised learning, a method of machine learning where the categories are predefined, and is used to categorize new probabilistic observations into said categories. When there are only two categories the problem is known as statistical binary classification."}, {"text": "Keeping in mind that LCS is a paradigm for genetic-based machine learning rather than a specific method, the following outlines key elements of a generic, modern (i.e. For simplicity let us focus on Michigan-style architecture with supervised learning. 
See the illustrations on the right laying out the sequential steps involved in this type of generic LCS."}, {"text": "A central goal in designing a machine learning system is to guarantee that the learning algorithm will generalize, or perform accurately on new examples after being trained on a finite number of them. In the 1990s, milestones were reached in obtaining generalization bounds for supervised learning algorithms. The technique historically used to prove generalization was to show that an algorithm was consistent, using the uniform convergence properties of empirical quantities to their means."}, {"text": "Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function \u0192, given some samples (x, \u0192(x)) and the assurance that \u0192 computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input."}, {"text": "Zero-shot learning (ZSL) is a problem setup in machine learning, where at test time, a learner observes samples from classes that were not observed during training, and needs to predict the category they belong to. This problem is widely studied in computer vision, natural language processing and machine perception."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? 
What are the extended dimensions of the pressure of the two parts?"}, {"text": "The steps for designing explicit, evidence-based guidelines were described in the late 1980s: Formulate the question (population, intervention, comparison intervention, outcomes, time horizon, setting); search the literature to identify studies that inform the question; interpret each study to determine precisely what it says about the question; if several studies address the question, synthesize their results (meta-analysis); summarize the evidence in \"evidence tables\"; compare the benefits, harms and costs in a \"balance sheet\"; draw a conclusion about the preferred practice; write the guideline; write the rationale for the guideline; have others review each of the previous steps; implement the guideline. For the purposes of medical education and individual-level decision making, five steps of EBM in practice were described in 1992 and the experience of delegates attending the 2003 Conference of Evidence-Based Health Care Teachers and Developers was summarized into five steps and published in 2005. This five step process can broadly be categorized as:"}]}, {"question": "What is blob in object detection", "positive_ctxs": [{"text": "A Blob is a group of connected pixels in an image that share some common property (e.g. grayscale value). In the image above, the dark connected regions are blobs, and the goal of blob detection is to identify and mark these regions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions. Informally, a blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered in some sense to be similar to each other. 
The most common method for blob detection is convolution."}, {"text": "There are several motivations for studying and developing blob detectors. One main reason is to provide complementary information about regions, which is not obtained from edge detectors or corner detectors. In early work in the area, blob detection was used to obtain regions of interest for further processing."}, {"text": "Usually those methods consist of two parts. The first stage is to detect interest points, fiducial markers or optical flow in the camera images. This step can use feature detection methods like corner detection, blob detection, edge detection or thresholding, and other image processing methods."}, {"text": "However, it is rather straightforward to extend this approach to other types of watershed constructions. For example, by proceeding beyond the first delimiting saddle point a \"grey-level blob tree\" can be constructed. Moreover, the grey-level blob detection method was embedded in a scale space representation and performed at all levels of scale, resulting in a representation called the scale-space primal sketch."}, {"text": "These regions could signal the presence of objects or parts of objects in the image domain with application to object recognition and/or object tracking. In other domains, such as histogram analysis, blob descriptors can also be used for peak detection with application to segmentation. Another common use of blob descriptors is as main primitives for texture analysis and texture recognition."}, {"text": "Among the approaches that are used to feature description, one can mention N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). 
In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection."}, {"text": "\"Feature detection with automatic scale selection\" (abstract). International Journal of Computer Vision. (Laplacian and determinant of Hessian blob detection as well as automatic scale selection)"}]}, {"question": "Which algorithm is used for sentiment analysis", "positive_ctxs": [{"text": "Overall, Sentiment analysis may involve the following types of classification algorithms: Linear Regression. Naive Bayes. Support Vector Machines."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text. There are various other types of sentiment analysis like Aspect-Based sentiment analysis, Grading sentiment analysis (positive, negative, neutral), Multilingual sentiment analysis and detection of emotions."}, {"text": "In general, the utility for practical commercial tasks of sentiment analysis as it is defined in academic research has been called into question, mostly since the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worrying about the effect of public discourse on e.g. 
brand or corporate reputation. To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals. The RepLab evaluation data set is focused less on the content of the text under consideration and more on the effect of the text in question on brand reputation. Because evaluation of sentiment analysis is becoming more and more task based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set."}, {"text": "Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combination ranking score of similarity and sentiment rating can be constructed for each candidate item. Except for the difficulty of the sentiment analysis itself, applying sentiment analysis on reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review."}, {"text": "The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social networks discussions. The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances, and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment. The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right."}, {"text": "A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level\u2014whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. 
Advanced, \"beyond polarity\" sentiment classification looks, for instance, at emotional states such as \"angry\", \"sad\", and \"happy\". Precursors to sentiment analysis include the General Inquirer, which provided hints toward quantifying patterns in text and, separately, psychological research that examined a person's psychological state based on analysis of their verbal behavior. Subsequently, the method described in a patent by Volcani and Fogel looked specifically at sentiment and identified individual words and phrases in text with respect to different emotional scales. A current system based on their work, called EffectCheck, presents synonyms that can be used to increase or decrease the level of evoked emotion in each scale."}, {"text": "Also, the problem of sentiment analysis is non-monotonic in respect to sentence extension and stop-word substitution (compare THEY would not let my dog stay in this hotel vs I would not let my dog stay in this hotel). To address this issue a number of rule-based and reasoning-based approaches have been applied to sentiment analysis, including defeasible logic programming. Also, there are a number of tree traversal rules applied to syntactic parse trees to extract the topicality of sentiment in an open-domain setting."}, {"text": "Perhaps the most widely used algorithm for manifold learning is kernel PCA. It is a combination of Principal component analysis and the kernel trick. PCA begins by computing the covariance matrix of the"}]}, {"question": "How do you find the critical value and rejection region", "positive_ctxs": [{"text": "One or two of the sections is the \u201crejection region\u201d; if your test value falls into that region, then you reject the null hypothesis. A one-tailed test with the rejection region in one tail. The critical value is the red line to the left of that region."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "But what about 12 hits, or 17 hits? 
What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "How do you find the skew of a distribution", "positive_ctxs": [{"text": "Calculation. The formula given in most textbooks is Skew = 3 * (Mean \u2013 Median) / Standard Deviation. This is known as an alternative Pearson Mode Skewness. You could calculate skew by hand."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "If n is large enough, then the skew of the distribution is not too great. 
In this case a reasonable approximation to B(n, p) is given by the normal distribution"}, {"text": "If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution"}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}]}, {"question": "What is class boundary in frequency distribution", "positive_ctxs": [{"text": "Class Boundaries. Separate one class in a grouped frequency distribution from another. The boundaries have one more decimal place than the raw data and therefore do not appear in the data. There is no gap between the upper boundary of one class and the lower boundary of the next class."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. 
Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. An example is shown below"}, {"text": "A frequency distribution table is an arrangement of the values that one or more variables take in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way, the table summarizes the distribution of values in the sample. An example is shown below"}, {"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "In statistics, a frequency distribution is a list, table or graph that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval."}, {"text": "In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5\u201320.5 and 20.5\u201333.5, but not two connecting intervals of 10.5\u201320.5 and 22.5\u201332.5."}, {"text": "In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. 
The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5\u201320.5 and 20.5\u201333.5, but not two connecting intervals of 10.5\u201320.5 and 22.5\u201332.5."}, {"text": "In mathematics, in the field of differential equations, a boundary value problem is a differential equation together with a set of additional constraints, called the boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions."}]}, {"question": "What is the main challenges of NLP", "positive_ctxs": [{"text": "Ambiguity. The main challenge of NLP is the understanding and modeling of elements within a variable context. In a natural language, words are unique but can have different meanings depending on the context resulting in ambiguity on the lexical, syntactic, and semantic levels."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. 
During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "What is deep learning and how does it relate to AI", "positive_ctxs": [{"text": "Deep learning is an AI function that mimics the workings of the human brain in processing data for use in detecting objects, recognizing speech, translating languages, and making decisions. Deep learning AI is able to learn without human supervision, drawing from data that is both unstructured and unlabeled."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The following is a pictorial summary of Sniedovich's (2007) discussion on local vs global robustness. 
For illustrative purposes it is cast here as a Treasure Hunt. It shows how the elements of info-gap's robustness model relate to one another and how the severe uncertainty is treated in the model."}, {"text": "Design is the abstraction and specification of patterns and organs of functionality that have been or will be implemented. Architecture is a degree higher in both abstraction and granularity. Consequentially, architecture is also more topological in nature than design, in that it specifies where major components meet and how they relate to one another."}, {"text": "The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. It is an application of kinematics, used to understand how the movements of animal limbs relate to the motion of the whole animal, for instance when walking or flying."}, {"text": "Explainable AI to detect algorithm Bias is a suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called \"conducting an AI audit\", where the \"auditor\" is an algorithm that goes through the AI model and the training data to identify biases. Currently, a new IEEE standard is being drafted that aims to specify methodologies which help creators of algorithms eliminate issues of bias and articulate transparency (i.e. to authorities or end users) about the function and possible effects of their algorithms."}, {"text": "Explainable AI to detect algorithm Bias is a suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called \"conducting an AI audit\", where the \"auditor\" is an algorithm that goes through the AI model and the training data to identify biases. Currently, a new IEEE standard is being drafted that aims to specify methodologies which help creators of algorithms eliminate issues of bias and articulate transparency (i.e. 
to authorities or end users) about the function and possible effects of their algorithms."}, {"text": "Chance normalized versions of recall, precision and G-measure correspond to Informedness, Markedness and Matthews Correlation and relate strongly to Kappa.The mutual information is an information theoretic measure of how much information is shared between a clustering and a ground-truth classification that can detect a non-linear similarity between two clusterings. Normalized mutual information is a family of corrected-for-chance variants of this that has a reduced bias for varying cluster numbers."}]}, {"question": "How do you find the moment generating function of a geometric distribution", "positive_ctxs": [{"text": "0:294:16Suggested clip \u00b7 116 secondsGeometric distribution moment generating function - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "th moment of the function given in the brackets. This identity follows by the convolution theorem for moment generating function and applying the chain-rule for differentiating a product."}, {"text": "However, the log-normal distribution is not determined by its moments. This implies that it cannot have a defined moment generating function in a neighborhood of zero."}, {"text": "If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. 
More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann\u2013Stieltjes integral"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Johnson considered the distribution of the logit - transformed variable ln(X/1\u2212X), including its moment generating function and approximations for large values of the shape parameters. This transformation extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (\u2212\u221e, +\u221e)."}, {"text": "the characteristic function is the moment-generating function of iX or the moment generating function of X evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform."}, {"text": "The fourth central moment is a measure of the heaviness of the tail of the distribution, compared to the normal distribution of the same variance. Since it is the expectation of a fourth power, the fourth central moment, where defined, is always nonnegative; and except for a point distribution, it is always strictly positive. The fourth central moment of a normal distribution is 3\u03c34."}]}, {"question": "What are the disadvantages of transfer learning", "positive_ctxs": [{"text": "The biggest negative of transfer learning is that it's very hard to do right and very easy to mess up. 
Especially in NLP this kind of approach has only been mainstream for about a year, which just isn't enough time when model runs take weeks."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where sPi are the N roots of the characteristic polynomial and will therefore be the poles of the transfer function. Consider the case of a transfer function with a single pole"}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Both positive and negative transfer learning was experimentally demonstrated.In 1993, Lorien Pratt published a paper on transfer in machine learning, formulating the discriminability-based transfer (DBT) algorithm.In 1997, the journal Machine Learning published a special issue devoted to transfer learning, and by 1998, the field had advanced to include multi-task learning, along with a more formal analysis of its theoretical foundations. Learning to Learn, edited by Pratt and Sebastian Thrun, is a 1998 review of the subject."}, {"text": "The dimensions and units of the transfer function model the output response of the device for a range of possible inputs. For example, the transfer function of a two-port electronic circuit like an amplifier might be a two-dimensional graph of the scalar voltage at the output as a function of the scalar voltage applied to the input; the transfer function of an electromechanical actuator might be the mechanical displacement of the movable arm as a function of electrical current applied to the device; the transfer function of a photodetector might be the output voltage as a function of the luminous intensity of incident light of a given wavelength."}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Taking that as the definition of the transfer function requires careful disambiguation between complex vs. real values, which is traditionally influenced by the interpretation of abs(H(s)) as the gain and -atan(H(s)) as the phase lag. Other definitions of the transfer function are used: for example"}, {"text": "Knowledge transfer by model parameters: One set of algorithms for one-shot learning achieves knowledge transfer through the reuse of model parameters, based on the similarity between previously and newly learned classes. Classes of objects are first learned on numerous training examples, then new object classes are learned using transformations of model parameters from the previously learnt classes or selecting relevant parameters for a classifier as in M. Fink, 2004."}]}, {"question": "Is the Monty Hall problem correct", "positive_ctxs": [{"text": "The Monty Hall problem is one of those rare curiosities \u2013 a mathematical problem that has made the front pages of national news. Everyone now knows, or thinks they know, the answer but a realistic look at the problem demonstrates that the standard mathematician's answer is wrong."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Probability and the Monty Hall problem\", BBC News Magazine, 11 September 2013 (video). Mathematician Marcus du Sautoy explains the Monty Hall paradox."}, {"text": "The problem is actually an extrapolation from the game show. Monty Hall did open a wrong door to build excitement, but offered a known lesser prize \u2013 such as $100 cash \u2013 rather than a choice to switch doors. As Monty Hall wrote to Selvin:"}, {"text": "Steve Selvin posed the Monty Hall problem in a pair of letters to the American Statistician in 1975. The first letter presented the problem in a version close to its presentation in Parade 15 years later. 
The second appears to be the first use of the term \"Monty Hall problem\"."}, {"text": "Paul Erd\u0151s, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation demonstrating vos Savant's predicted result.The problem is a paradox of the veridical type, because the correct choice (that one should switch doors) is so counterintuitive it can seem absurd, but is nevertheless demonstrably true. The Monty Hall problem is mathematically closely related to the earlier Three Prisoners problem and to the much older Bertrand's box paradox."}, {"text": "The Monty Hall problem is a brain teaser, in the form of a probability puzzle, loosely based on the American television game show Let's Make a Deal and named after its original host, Monty Hall. The problem was originally posed (and solved) in a letter by Steve Selvin to the American Statistician in 1975. It became famous as a question from a reader's letter quoted in Marilyn vos Savant's \"Ask Marilyn\" column in Parade magazine in 1990:"}, {"text": "Mueser, Peter R. & Granberg, Donald (May 1999). \"The Monty Hall Dilemma Revisited: Understanding the Interaction of Problem Definition and Decision Making\"."}, {"text": "Herbranson, W. T. & Schroeder, J. \"Are birds smarter than mathematicians? Pigeons (Columba livia) perform optimally on a version of the Monty Hall Dilemma\"."}]}, {"question": "Does a t distribution have a normal distribution", "positive_ctxs": [{"text": "The t distribution is therefore leptokurtic. The t distribution approaches the normal distribution as the degrees of freedom increase. Since the t distribution is leptokurtic, the percentage of the distribution within 1.96 standard deviations of the mean is less than the 95% for the normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Derive the distribution of the test statistic under the null hypothesis from the assumptions. In standard cases this will be a well-known result. 
For example, the test statistic might follow a Student's t distribution with known degrees of freedom, or a normal distribution with known mean and variance."}]}, {"question": "How do I extract information from a text", "positive_ctxs": [{"text": "Let's explore 5 common techniques used for extracting information from the above text.Named Entity Recognition. The most basic and useful technique in NLP is extracting the entities in the text. Sentiment Analysis. Text Summarization. Aspect Mining. Topic Modeling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}, {"text": "I answer that with a resounding, yes. As part of my evidence, I consider testimony from journalists themselves. ... [A] solid majority of journalists do allow their political ideology to influence their reporting."}, {"text": "Techniques such as data mining, natural language processing (NLP), and text analytics provide different methods to find patterns in, or otherwise interpret, this information. Common techniques for structuring text usually involve manual tagging with metadata or part-of-speech tagging for further text mining-based structuring. 
The Unstructured Information Management Architecture (UIMA) standard provided a common framework for processing this information to extract meaning and create structured data about the information.Software that creates machine-processable structure can utilize the linguistic, auditory, and visual structure that exist in all forms of human communication."}, {"text": "Syntactic or structural ambiguities are frequently found in humor and advertising. One of the most enduring jokes from the famous comedian Groucho Marx was his quip that used a modifier attachment ambiguity: \"I shot an elephant in my pajamas. How he got into my pajamas I don't know.\""}, {"text": "Classifier4J - Classifier4J is a Java library designed to do text classification. It comes with an implementation of a Bayesian classifier."}]}, {"question": "What is Optuna", "positive_ctxs": [{"text": "Optuna is an automated hyperparameter optimization software framework that is knowingly invented for the machine learning-based tasks. It emphasizes an authoritative, define-by-run approach user API."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) 
What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}]}, {"question": "What is fitting in machine learning", "positive_ctxs": [{"text": "Model fitting is a measure of how well a machine learning model generalizes to similar data to that on which it was trained. During the fitting process, you run an algorithm on data for which you know the target variable, known as \u201clabeled\u201d data, and produce a machine learning model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. 
Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Does the central limit theorem apply to discrete random variables", "positive_ctxs": [{"text": "The central limit theorem states that the CDF of Zn converges to the standard normal CDF. converges in distribution to the standard normal random variable as n goes to infinity, that is limn\u2192\u221eP(Zn\u2264x)=\u03a6(x), for all x\u2208R, The Xi's can be discrete, continuous, or mixed random variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as sample size n goes to infinity. While the sample size is necessarily finite, it is customary to assume that n is \"large enough\" so that the true distribution of the OLS estimator is close to its asymptotic limit."}, {"text": "Nevertheless, we can apply the central limit theorem to derive their asymptotic properties as sample size n goes to infinity. 
While the sample size is necessarily finite, it is customary to assume that n is \"large enough\" so that the true distribution of the OLS estimator is close to its asymptotic limit."}, {"text": "The convergence of a random walk toward the Wiener process is controlled by the central limit theorem, and by Donsker's theorem. For a particle in a known fixed position at t = 0, the central limit theorem tells us that after a large number of independent steps in the random walk, the walker's position is distributed according to a normal distribution of total variance:"}, {"text": "As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting."}, {"text": "Many test statistics, scores, and estimators encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions. 
The central limit theorem implies that those statistical parameters will have asymptotically normal distributions."}]}, {"question": "How do you solve a multinomial distribution", "positive_ctxs": [{"text": "0:315:15Suggested clip \u00b7 110 secondsMultinomial Distributions: Examples (Basic Probability and Statistics YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Example: On a 1-5 scale where 1 means disagree completely and 5 means agree completely, how much do you agree with the following statement. \"The Federal government should do more to help people facing foreclosure on their homes. \"A multinomial discrete-choice model can examine the responses to these questions (model G, model H, model I)."}, {"text": "where I is the indicator function. Then Y has a distribution which is a special case of the multinomial distribution with parameter"}, {"text": "Mult() is a multinomial distribution over a single observation (equivalent to a categorical distribution). The state space is a \"one-of-K\" representation, i.e."}, {"text": "In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. 
This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a \"1-of-K\" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range"}]}, {"question": "What is difference between regression and classification", "positive_ctxs": [{"text": "The most significant difference between regression vs classification is that while regression helps predict a continuous quantity, classification predicts discrete class labels. There are also some overlaps between the two types of machine learning algorithms."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The critical difference between AIC and BIC (and their variants) is the asymptotic property under well-specified and misspecified model classes. Their fundamental differences have been well-studied in regression variable selection and autoregression order selection problems. In general, if the goal is prediction, AIC and leave-one-out cross-validations are preferred."}, {"text": "Another measure that is used with correlation differences is Cohen's q. This is the difference between two Fisher transformed Pearson regression coefficients."}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. 
What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "From nearest neighbor couplings, the difference in energy between all spins equal and staggered spins is 8J. The difference in energy between all spins equal and nonstaggered but net zero spin is 4J. Ignoring four-spin interactions, a reasonable truncation is the average of these two energies or 6J."}]}, {"question": "Is pore strip bad", "positive_ctxs": [{"text": "Not only are nose strips bad for those with sensitive skin, they also worsen other skin conditions. Pore strips exacerbate rosacea-prone skin , especially if they contain irritating ingredients like alcohol and astringents. They also aggravate extremely dry skin, eczema and psoriasis ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? 
Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Molasses Number is a measure of the degree of decolorization of a standard molasses solution that has been diluted and standardized against standardized activated carbon. Due to the size of color bodies, the molasses number represents the potential pore volume available for larger adsorbing species. As all of the pore volume may not be available for adsorption in a particular waste water application, and as some of the adsorbate may enter smaller pores, it is not a good measure of the worth of a particular activated carbon for a specific application."}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Is the yield of good cookies affected by the baking temperature and time in the oven? The table shows data for 8 batches of cookies."}]}, {"question": "What is K means algorithm with example", "positive_ctxs": [{"text": "If k is given, the K-means algorithm can be executed in the following steps: Partition of objects into k non-empty subsets. Identifying the cluster centroids (mean point) of the current partition. Compute the distances from each point and allot points to the cluster where the distance from the centroid is minimum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? 
It has to do with uncertainty at runtime of a plan."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error shall be defined in advance. 
During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories shall be encouraged)."}]}, {"question": "How do you describe the sampling distribution", "positive_ctxs": [{"text": "A sampling distribution is where you take a population (N), and find a statistic from that population. This is repeated for all possible samples from the population. Example: You hold a survey about college student's GRE scores and calculate that the standard deviation is 1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Galton coined the term \"regression\" to describe an observable fact in the inheritance of multi-factorial quantitative genetic traits: namely that the offspring of parents who lie at the tails of the distribution will tend to lie closer to the centre, the mean, of the distribution. He quantified this trend, and in doing so invented linear regression analysis, thus laying the groundwork for much of modern statistical modelling. Since then, the term \"regression\" has taken on a variety of meanings, and it may be used by modern statisticians to describe phenomena of sampling bias which have little to do with Galton's original observations in the field of genetics."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": ", which could be close to infinity. Moreover, even when you apply the Rejection sampling method, it is always hard to optimize the bound"}]}, {"question": "What is single layer Perceptron and Multilayer Perceptron", "positive_ctxs": [{"text": "A Multi Layer Perceptron (MLP) contains one or more hidden layers (apart from one input and one output layer). While a single layer perceptron can only learn linear functions, a multi layer perceptron can also learn non \u2013 linear functions. Figure 4 shows a multi layer perceptron with a single hidden layer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Voted Perceptron (Freund and Schapire, 1999), is a variant using multiple weighted perceptrons. 
The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron will also be given another weight corresponding to how many examples they correctly classify before wrongly classifying one, and at the end the output will be a weighted vote on all perceptrons."}, {"text": "The Voted Perceptron (Freund and Schapire, 1999) is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron will also be given another weight corresponding to how many examples they correctly classify before wrongly classifying one, and at the end the output will be a weighted vote on all perceptrons."}, {"text": "The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al. Margin bounds guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013) who extend previous results and give new L1 bounds. The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons."}, {"text": "The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al. Margin bounds guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013) who extend previous results and give new L1 bounds. The perceptron is a simplified model of a biological neuron. 
While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory? (#5) \u2013 Finale, summing up, and my own view"}, {"text": "A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN). The term MLP is used ambiguously, sometimes loosely to mean any feedforward ANN, sometimes strictly to refer to networks composed of multiple layers of perceptrons (with threshold activation); see \u00a7 Terminology. Multilayer perceptrons are sometimes colloquially referred to as \"vanilla\" neural networks, especially when they have a single hidden layer. An MLP consists of at least three layers of nodes: an input layer, a hidden layer and an output layer."}, {"text": "A multilayer perceptron (MLP) is a class of feedforward artificial neural network (ANN). The term MLP is used ambiguously, sometimes loosely to mean any feedforward ANN, sometimes strictly to refer to networks composed of multiple layers of perceptrons (with threshold activation); see \u00a7 Terminology. Multilayer perceptrons are sometimes colloquially referred to as \"vanilla\" neural networks, especially when they have a single hidden layer. An MLP consists of at least three layers of nodes: an input layer, a hidden layer and an output layer."}]}, {"question": "What are some examples of continuous distribution probability", "positive_ctxs": [{"text": "A continuous distribution has a range of values that are infinite, and therefore uncountable. 
For example, time is infinite: you could count from 0 seconds to a billion seconds\u2026a trillion seconds\u2026and so on, forever."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A continuous probability distribution is a probability distribution whose support is an uncountable set, such as an interval in the real line. They are uniquely characterized by a cumulative distribution function that can be used to calculate the probability for each subset of the support. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others."}, {"text": "A continuous probability distribution is a probability distribution whose support is an uncountable set, such as an interval in the real line. They are uniquely characterized by a cumulative distribution function that can be used to calculate the probability for each subset of the support. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others."}, {"text": "A continuous probability distribution is a probability distribution whose support is an uncountable set, such as an interval in the real line. They are uniquely characterized by a cumulative distribution function that can be used to calculate the probability for each subset of the support. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others."}, {"text": "A continuous probability distribution is a probability distribution whose support is an uncountable set, such as an interval in the real line. They are uniquely characterized by a cumulative distribution function that can be used to calculate the probability for each subset of the support. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. 
Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}, {"text": "An analogous formula applies to the case of a continuous probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example). Moreover, the mean can be infinite for some distributions."}]}, {"question": "What is the importance of binomial distribution", "positive_ctxs": [{"text": "The binomial distribution model allows us to compute the probability of observing a specified number of \"successes\" when the process is repeated a specific number of times (e.g., in a set of patients) and the outcome for a given patient is either a success or a failure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The distribution of N thus is the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d)."}, {"text": "The fundamental issue in implementing importance sampling simulation is the choice of the biased distribution which encourages the important regions of the input variables. Choosing or designing a good biased distribution is the \"art\" of importance sampling. The rewards for a good distribution can be huge run-time savings; the penalty for a bad distribution can be longer run times than for a general Monte Carlo simulation without importance sampling."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. 
The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}, {"text": "The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used."}, {"text": "The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success."}]}, {"question": "What is a coefficient in a regression model", "positive_ctxs": [{"text": "Regression coefficients represent the mean change in the response variable for one unit of change in the predictor variable while holding other predictors in the model constant. 
The coefficient indicates that for every additional meter in height you can expect weight to increase by an average of 106.5 kilograms."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The model got its name (spike-and-slab) due to the shape of the two prior distributions. The \"spike\" is the probability of a particular coefficient in the model to be zero. The \"slab\" is the prior distribution for the regression coefficient values."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. 
In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors."}, {"text": "The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model."}, {"text": "Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable, although many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model (a form of binary regression). Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail which is represented by an indicator variable, where the two values are labeled \"0\" and \"1\"."}]}, {"question": "How do you predict a value in linear regression in R", "positive_ctxs": [{"text": "2:107:35Suggested clip \u00b7 110 secondsLinear Regression R Program Make Predictions - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Random forests can be used to rank the importance of variables in a regression or classification problem in a natural way. The following technique was described in Breiman's original paper and is implemented in the R package randomForest.The first step in measuring the variable importance in a data set"}, {"text": "Data are in the R package ISwR. 
The Cox proportional hazards regression using R gives the results shown in the box."}, {"text": "Data are in the R package ISwR. The Cox proportional hazards regression using R gives the results shown in the box."}, {"text": "In other words, a simple linear regression model might, for example, predict that a given randomly sampled person in Seattle would have an average yearly income $10,000 higher than a similar person in Mobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling"}, {"text": "is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling"}]}, {"question": "What is the difference between Bayes rule and conditional probability", "positive_ctxs": [{"text": "The nominator is the joint probability and the denominator is the probability of the given outcome. This is the conditional probability: P(A\u2223B)=P(A\u2229B)P(B) This is the Bayes' rule: P(A\u2223B)=P(B|A)\u2217P(A)P(B)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Then why is the notion of generalized Bayes rule an improvement? 
It is indeed equivalent to the notion of Bayes rule when a Bayes rule exists and all"}, {"text": "In Bayes linear statistics, the probability model is only partially specified, and it is not possible to calculate conditional probability by Bayes' rule. Instead Bayes linear suggests the calculation of an Adjusted Expectation."}, {"text": "If a Bayes rule is unique then it is admissible. For example, as stated above, under mean squared error (MSE) the Bayes rule is unique and therefore admissible."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "In probability theory, the chain rule (also called the general product rule) permits the calculation of any member of the joint distribution of a set of random variables using only conditional probabilities. 
The rule is useful in the study of Bayesian networks, which describe a probability distribution in terms of conditional probabilities."}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}]}, {"question": "What does Fourier transform represent", "positive_ctxs": [{"text": "The Fourier transform of a function of time is a complex-valued function of frequency, whose magnitude (absolute value) represents the amount of that frequency present in the original function, and whose argument is the phase offset of the basic sinusoid in that frequency."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):"}, {"text": "; however, for many signals of interest the Fourier transform does not formally exist. Regardless, Parseval's Theorem tells us that we can re-write the average power as follows."}, {"text": "This is G, since the Fourier transform of this integral is easy. Each fixed \u03c4 contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k."}, {"text": "Discrete-time Fourier transform (DTFT): Equivalent to the Fourier transform of a \"continuous\" function that is constructed from the discrete input function by using the sample values to modulate a Dirac comb. When the sample values are derived by sampling a function on the real line, \u0192(x), the DTFT is equivalent to a periodic summation of the Fourier transform of \u0192. The DTFT output is always periodic (cyclic)."}, {"text": "The spectral density function is the Fourier transform of the autocovariance function. 
In discrete terms this will be the discrete-time Fourier transform:"}, {"text": "The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:"}, {"text": "Discrete Fourier transform (general). The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on a fast Fourier transform (FFT). The Nyquist\u2013Shannon sampling theorem is critical for understanding the output of such discrete transforms."}]}, {"question": "What are the four assumptions of linear regression", "positive_ctxs": [{"text": "The Four Assumptions of Linear Regression: Linear relationship: There exists a linear relationship between the independent variable, x, and the dependent variable, y. Independence: The residuals are independent. Homoscedasticity: The residuals have constant variance at every level of x. Normality: The residuals of the model are normally distributed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "There are several key assumptions that underlie the use of ANCOVA and affect interpretation of the results. The standard linear regression assumptions hold; further we assume that the slope of the covariate is equal across all treatment groups (homogeneity of regression slopes)."}, {"text": "There are several key assumptions that underlie the use of ANCOVA and affect interpretation of the results. The standard linear regression assumptions hold; further we assume that the slope of the covariate is equal across all treatment groups (homogeneity of regression slopes)."}, {"text": "The Gauss\u2013Markov theorem states that regression models which fulfill the classical linear regression model assumptions provide the most efficient, linear and unbiased estimators. 
In ordinary least squares, the relevant assumption of the classical linear regression model is that the error term is uncorrelated with the regressors."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting dependent variables that take membership in one of a limited number of categories (treating the dependent variable in the binomial case as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated."}, {"text": "Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis. If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. 
The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis."}]}, {"question": "What is the intuition behind the logarithm", "positive_ctxs": [{"text": "The logarithm is to exponentiation as division is to multiplication: The logarithm is the inverse of the exponent: it undoes exponentiation. When studying logarithms, always remember the following fundamental equivalence: if and only if . Whenever one of these is true, so is the other."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The basic intuition behind gradient descent can be illustrated by a hypothetical scenario. A person is stuck in the mountains and is trying to get down (i.e. trying to find the global minimum)."}, {"text": "What is the probability of winning the car given the player has picked door 1 and the host has opened door 3?The answer to the first question is 2/3, as is correctly shown by the \"simple\" solutions. But the answer to the second question is now different: the conditional probability the car is behind door 1 or door 2 given the host has opened door 3 (the door on the right) is 1/2. This is because Monty's preference for rightmost doors means that he opens door 3 if the car is behind door 1 (which it is originally with probability 1/3) or if the car is behind door 2 (also originally with probability 1/3)."}, {"text": "The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the p-th power of a number is p times the logarithm of the number itself; the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. 
Each of the identities can be derived after substitution of the logarithm definitions"}, {"text": "The intuition behind the CDF-based approach is that bounds on the CDF of a distribution can be translated into bounds on statistical functionals of that distribution. Given an upper and lower bound on the CDF, the approach involves finding the CDFs within the bounds that maximize and minimize the statistical functional of interest."}, {"text": "In manifold learning, the input data is assumed to be sampled from a low dimensional manifold that is embedded inside of a higher-dimensional vector space. The main intuition behind MVU is to exploit the local linearity of manifolds and create a mapping that preserves local neighbourhoods at every point of the underlying manifold."}, {"text": "In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function, whether applied to real numbers or complex numbers. The modular discrete logarithm is another variant; it has uses in public-key cryptography."}, {"text": "If the car is behind door 1 the host can open either door 2 or door 3, so the probability the car is behind door 1 AND the host opens door 3 is 1/3 \u00d7 1/2 = 1/6. If the car is behind door 2 (and the player has picked door 1) the host must open door 3, so the probability the car is behind door 2 AND the host opens door 3 is 1/3 \u00d7 1 = 1/3. 
These are the only cases where the host opens door 3, so if the player has picked door 1 and the host opens door 3 the car is twice as likely to be behind door 2."}]}, {"question": "Is batch normalization used in inference", "positive_ctxs": [{"text": "Batch Normalization during inference: During testing or inference phase we can't apply the same batch-normalization as we did during training because we might pass only one sample at a time, so it doesn't make sense to find mean and variance on a single sample."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others sustain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks. After batch norm, many other in-layer normalization methods have been introduced, such as instance normalization, layer normalization, group normalization."}, {"text": "Besides analyzing this correlation experimentally, theoretical analysis is also provided for verification that batch normalization could result in a smoother landscape. Consider two identical networks, one contains batch normalization layers and the other doesn't, the behaviors of these two networks are then compared. Denote the loss functions as"}, {"text": "Batch normalization (also known as batch norm) is a method used to make artificial neural networks faster and more stable through normalization of the input layer by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015. While the effect of batch normalization is evident, the reasons behind its effectiveness remain under discussion. 
It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network."}, {"text": "In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restrained to each mini-batch in the training process."}, {"text": "The correlation between the gradients are computed for four models: a standard VGG network, a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift."}, {"text": "is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer."}, {"text": "The correlation between batch normalization and internal covariate shift is widely accepted but was not supported by experimental results. Scholars recently show with experiments that the hypothesized relationship is not an accurate one. 
Rather, the enhanced accuracy with the batch normalization layer seems to be independent of internal covariate shift."}]}, {"question": "How do you find the distribution function of a random variable", "positive_ctxs": [{"text": "(1 p)xp = (1 p)a+1p + \u00b7\u00b7\u00b7 + (1 p)bp = (1 p)a+1p (1 p)b+1p 1 (1 p) = (1 p)a+1 (1 p)b+1 We can take a = 0 to find the distribution function for a geometric random variable. The initial d indicates density and p indicates the probability from the distribution function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters."}, {"text": "A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters."}, {"text": "A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters."}, {"text": "A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters."}, {"text": "A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters."}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. 
can be described by a probability mass function that assigns a probability to each value in the image of"}, {"text": "is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of"}]}, {"question": "How do you test for convergence and divergence in a series", "positive_ctxs": [{"text": "If the limit of |a[n+1]/a[n]| is less than 1, then the series (absolutely) converges. If the limit is larger than one, or infinite, then the series diverges."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not."}, {"text": "Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not."}, {"text": "Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. 
They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Campbell and Fiske (1959) developed the Multitrait-Multimethod Matrix to assess the construct validity of a set of measures in a study. The approach stresses the importance of using both discriminant and convergent validation techniques when assessing new tests. In other words, in order to establish construct validity, you have to demonstrate both convergence and discrimination."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is visualization in machine learning", "positive_ctxs": [{"text": "Data visualization is a technique that uses an array of static and interactive visuals within a specific context to help people understand and make sense of large amounts of data. The data is often displayed in a story format that visualizes patterns, trends and correlations that may otherwise go unnoticed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Humans often have difficulty comprehending data in many dimensions. Thus, reducing data to a small number of dimensions is useful for visualization purposes."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Does NLP use deep learning", "positive_ctxs": [{"text": "Deep Learning is extensively used for Predictive Analytics, NLP, Computer Vision, and Object Recognition."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level. Beyond semantic NLP, the ultimate goal of \"narrative\" NLP is to embody a full understanding of commonsense reasoning. By 2019, transformer-based deep learning architectures could generate coherent text."}, {"text": "Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level. Beyond semantic NLP, the ultimate goal of \"narrative\" NLP is to embody a full understanding of commonsense reasoning. By 2019, transformer-based deep learning architectures could generate coherent text."}, {"text": "Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level. 
Beyond semantic NLP, the ultimate goal of \"narrative\" NLP is to embody a full understanding of commonsense reasoning. By 2019, transformer-based deep learning architectures could generate coherent text."}, {"text": "A form of computer technology \u2013 computers and their application. NLP makes use of computers, image scanners, microphones, and many types of software programs."}, {"text": "Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use gradient descent on a neural network with a fixed topology."}, {"text": "In some areas, this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing. For instance, the term neural machine translation (NMT) emphasizes the fact that deep learning-based approaches to machine translation directly learn sequence-to-sequence transformations, obviating the need for intermediate steps such as word alignment and language modeling that was used in statistical machine translation (SMT). Latest works tend to use non-technical structure of a given task to build proper neural network."}, {"text": "In some areas, this shift has entailed substantial changes in how NLP systems are designed, such that deep neural network-based approaches may be viewed as a new paradigm distinct from statistical natural language processing. For instance, the term neural machine translation (NMT) emphasizes the fact that deep learning-based approaches to machine translation directly learn sequence-to-sequence transformations, obviating the need for intermediate steps such as word alignment and language modeling that was used in statistical machine translation (SMT). 
Latest works tend to use non-technical structure of a given task to build proper neural network."}]}, {"question": "How do you identify a random variable", "positive_ctxs": [{"text": "If you see a lowercase x or y, that's the kind of variable you're used to in algebra. It refers to an unknown quantity or quantities. If you see an uppercase X or Y, that's a random variable and it usually refers to the probability of getting a certain outcome."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}]}, {"question": "How is gradient descent used in machine learning", "positive_ctxs": [{"text": "Gradient descent is an optimization algorithm that's used when training a machine learning model. It's based on a convex function and tweaks its parameters iteratively to minimize a given function to its local minimum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. 
Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. 
When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}, {"text": "It is used in Geophysics, specifically in applications of Full-Waveform Inversion (FWI).Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks.Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for estimating linear regression models, originally under the name ADALINE."}]}, {"question": "What is part of speech tagging in NLP", "positive_ctxs": [{"text": "It is a process of converting a sentence to forms \u2013 list of words, list of tuples (where each tuple is having a form (word, tag)). 
The tag, in this case, is a part-of-speech tag, and signifies whether the word is a noun, adjective, verb, and so on."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Although the identification of parts of speech turned out not to be very useful for speech recognition, tagging methods developed during these projects are now used in various NLP applications.The incremental research techniques developed at IBM eventually became dominant in the field after DARPA, in the mid-80s, returned to NLP research and imposed that methodology to participating teams, shared common goals, data, and precise evaluation metrics. The Continuous Speech Recognition Group's research, which required large amounts of data to train the algorithms, eventually led to the creation of the Linguistic Data Consortium. In the 1980s, although the broader problem of speech recognition remained unsolved, they sought to apply the methods developed to other problems; machine translation and stock value prediction were both seen as options."}, {"text": "For some time, part-of-speech tagging was considered an inseparable part of natural language processing, because there are certain cases where the correct part of speech cannot be decided without understanding the semantics or even the pragmatics of the context. This is extremely expensive, especially because analyzing the higher levels is much harder when multiple part-of-speech possibilities must be considered for each word."}, {"text": "Part-of-speech tagging is harder than just having a list of words and their parts of speech, because some words can represent more than one part of speech at different times, and because some parts of speech are complex or unspoken. This is not rare\u2014in natural languages (as opposed to many artificial languages), a large percentage of word-forms are ambiguous. 
For example, even \"dogs\", which is usually thought of as just a plural noun, can also be a verb:"}, {"text": "For example, NN for singular common nouns, NNS for plural common nouns, NP for singular proper nouns (see the POS tags used in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them as features somewhat independent from part-of-speech.In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. Work on stochastic methods for tagging Koine Greek (DeRose 1990) has used over 1,000 parts of speech and found that about as many words were ambiguous in that language as in English."}, {"text": "Sequence tagging is a class of problems prevalent in natural language processing, where input data are often sequences (e.g. The sequence tagging problem appears in several guises, e.g. part-of-speech tagging and named entity recognition."}, {"text": "CLAWS pioneered the field of HMM-based part of speech tagging but were quite expensive since it enumerated all possibilities. It sometimes had to resort to backup methods when there were simply too many options (the Brown Corpus contains a case with 17 ambiguous words in a row, and there are words such as \"still\" that can represent as many as 7 distinct parts of speech (DeRose 1990, p. 82))."}, {"text": "In machine learning, sequence labeling is a type of pattern recognition task that involves the algorithmic assignment of a categorical label to each member of a sequence of observed values. A common example of a sequence labeling task is part of speech tagging, which seeks to assign a part of speech to each word in an input sentence or document. 
Sequence labeling can be treated as a set of independent classification tasks, one per member of the sequence."}]}, {"question": "What is structural equation modeling used for", "positive_ctxs": [{"text": "Structural equation modeling is a multivariate statistical analysis technique that is used to analyze structural relationships. This technique is the combination of factor analysis and multiple regression analysis, and it is used to analyze the structural relationship between measured variables and latent constructs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors. CFA uses structural equation modeling to test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables. Structural equation modeling approaches can accommodate measurement error, and are less restrictive than least-squares estimation."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}, {"text": "In addition, by an adjustment PLS-PM is capable of consistently estimating certain parameters of common factor models as well, through an approach called consistent PLS (PLSc). 
A further related development is factor-based PLS-PM (PLSF), a variation of which employs PLSc as a basis for the estimation of the factors in common factor models; this method significantly increases the number of common factor model parameters that can be estimated, effectively bridging the gap between classic PLS and covariance\u2010based structural equation modeling. Furthermore, PLS-PM can be used for out-sample prediction purposes, and can be employed as an estimator in confirmatory composite analysis.The PLS structural equation model is composed of two sub-models: the measurement model and structural model."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "is a component-based estimation approach that differs from the covariance-based structural equation modeling. Unlike covariance-based approaches to structural equation modeling, PLS-PM does not fit a common factor model to the data, it rather fits a composite model. In doing so, it maximizes the amount of variance explained (though what this means from a statistical point of view is unclear and PLS-PM users do not agree on how this goal might be achieved)."}, {"text": "Structural equation modeling (SEM) includes a diverse set of mathematical models, computer algorithms, and statistical methods that fit networks of constructs to data. SEM includes confirmatory factor analysis, confirmatory composite analysis, path analysis, partial least squares path modeling, and latent growth modeling. 
The concept should not be confused with the related concept of structural models in econometrics, nor with structural models in economics."}]}, {"question": "What equations are used for classification in a support vector machine", "positive_ctxs": [{"text": "(Note that how a support vector machine classifies points that fall on a boundary line is implementation dependent. In our discussions, we have said that points falling on the line will be considered negative examples, so the classification equation is w . u + b \u2264 0.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning, the radial basis function kernel, or RBF kernel, is a popular kernel function used in various kernelized learning algorithms. In particular, it is commonly used in support vector machine classification.The RBF kernel on two samples x and x', represented as feature vectors in some input space, is defined as"}, {"text": "In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines,"}, {"text": "Consider a binary classification problem with a dataset (x1, y1), ..., (xn, yn), where xi is an input vector and yi \u2208 {-1, +1} is a binary label corresponding to it. A soft-margin support vector machine is trained by solving a quadratic programming problem, which is expressed in the dual form as follows:"}, {"text": ", this approach defines a general class of algorithms named Tikhonov regularization. 
For instance, using the hinge loss leads to the support vector machine algorithm, and using the epsilon-insensitive loss leads to support vector regression."}, {"text": "MLPs were a popular machine learning solution in the 1980s, finding applications in diverse fields such as speech recognition, image recognition, and machine translation software, but thereafter faced strong competition from much simpler (and related) support vector machines. Interest in backpropagation networks returned due to the successes of deep learning."}, {"text": "MLPs were a popular machine learning solution in the 1980s, finding applications in diverse fields such as speech recognition, image recognition, and machine translation software, but thereafter faced strong competition from much simpler (and related) support vector machines. Interest in backpropagation networks returned due to the successes of deep learning."}, {"text": "MLPs were a popular machine learning solution in the 1980s, finding applications in diverse fields such as speech recognition, image recognition, and machine translation software, but thereafter faced strong competition from much simpler (and related) support vector machines. Interest in backpropagation networks returned due to the successes of deep learning."}]}, {"question": "What is the Matrix theory", "positive_ctxs": [{"text": "Matrix theory is a branch of mathematics which is focused on study of matrices. Initially, it was a sub-branch of linear algebra, but soon it grew to cover subjects related to graph theory, algebra, combinatorics and statistics as well."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. 
It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing exactly the same concepts."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:"}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "An algebraic formulation of the above can be obtained by using the min-plus algebra. Matrix multiplication in this system is defined as follows: Given two"}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What does activation function do in neural network", "positive_ctxs": [{"text": "In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. 
A standard integrated circuit can be seen as a digital network of activation functions that can be \"ON\" (1) or \"OFF\" (0), depending on input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property."}, {"text": "When activation functions have this property, the neural network will learn efficiently when its weights are initialized with small random values. When the activation function does not approximate identity near the origin, special care must be used when initializing the weights. In the table below, activation functions where"}, {"text": "The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like"}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "is the weighted sum of the input connections. 
Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models)."}, {"text": "is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models)."}]}, {"question": "Is variance and standard error the same", "positive_ctxs": [{"text": "While the variance and the standard error of the mean are different estimates of variability, one can be derived from the other. Multiply the standard error of the mean by itself to square it. This step assumes that the standard error is a known quantity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the reduced chi-square statistic is used extensively in goodness of fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating and variance of unit weight in the context of weighted least squares.Its square root is called regression standard error, standard error of the regression, or standard error of the equation"}, {"text": "The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases."}, {"text": "standard error of that quantity. 
For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:"}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "The standard deviation of a population or sample and the standard error of a statistic (e.g., of the sample mean) are quite different, but related. The sample mean's standard error is the standard deviation of the set of means that would be found by drawing an infinite number of repeated samples from the population and computing a mean for each sample. 
The mean's standard error turns out to equal the population standard deviation divided by the square root of the sample size, and is estimated by using the sample standard deviation divided by the square root of the sample size."}]}, {"question": "What is variable screening", "positive_ctxs": [{"text": "Variable screening is the process of filtering out irrelevant variables, with the aim to reduce the dimensionality from ultrahigh to high while retaining all important variables. The main theme of this thesis is to develop variable screening and variable selection methods for high dimensional data analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. 
What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "For example, if 1,339 women age 50\u201359 have to be invited for breast cancer screening over a ten-year period in order to prevent one woman from dying of breast cancer, then the NNT for being invited to breast cancer screening is 1339."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is the role of probability to statistic", "positive_ctxs": [{"text": "Probability Role of probability in statistics: Use probability to predict results of experiment under assumptions. Compute probability of error larger than given amount. Compute probability of given departure between prediction and results under assumption."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if \"no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter\". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than the statistic, as to which of those probability distributions is the sampling distribution."}, {"text": "Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be larger increasing the probability of Type-II error. 
The Wald statistic also tends to be biased when data are sparse."}, {"text": "Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be larger increasing the probability of Type-II error. The Wald statistic also tends to be biased when data are sparse."}, {"text": "Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be larger increasing the probability of Type-II error. The Wald statistic also tends to be biased when data are sparse."}, {"text": "which means, if the true speed of a vehicle is 125, the drive has the probability of 0.36% to avoid the fine when the statistic is performed at level 125 since the recorded average speed is lower than 121.9. If the true speed is closer to 121.9 than 125, then the probability of avoiding the fine will also be higher."}, {"text": "which means, if the true speed of a vehicle is 125, the drive has the probability of 0.36% to avoid the fine when the statistic is performed at level 125 since the recorded average speed is lower than 121.9. If the true speed is closer to 121.9 than 125, then the probability of avoiding the fine will also be higher."}, {"text": "which means, if the true speed of a vehicle is 125, the drive has the probability of 0.36% to avoid the fine when the statistic is performed at level 125 since the recorded average speed is lower than 121.9. 
If the true speed is closer to 121.9 than 125, then the probability of avoiding the fine will also be higher."}]}, {"question": "What are statistics and probability", "positive_ctxs": [{"text": "Probability is the study of random events. It is used in analyzing games of chance, genetics, weather prediction, and a myriad of other everyday events. Statistics is the mathematics we use to collect, organize, and interpret numerical data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). 
Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference."}, {"text": "Now, assume (for example) that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?"}, {"text": "E2) A newlywed couple plans to have children, and will continue until the first girl. What is the probability that there are zero boys before the first girl, one boy before the first girl, two boys before the first girl, and so on?"}, {"text": "The earliest writings on probability and statistics, statistical methods drawing from probability theory, date back to Arab mathematicians and cryptographers, notably Al-Khalil (717\u2013786) and Al-Kindi (801\u2013873). In the 18th century, statistics also started to draw heavily from calculus. In more recent years statistics has relied more on statistical software."}]}, {"question": "What does probability density function represent", "positive_ctxs": [{"text": "Probability density function (PDF) is a statistical expression that defines a probability distribution (the likelihood of an outcome) for a discrete random variable (e.g., a stock or ETF) as opposed to a continuous random variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function \u0192), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies"}, {"text": "It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. 
(This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution\u2014that is, taking \u22121 or 1 for values, with probability \u00bd each."}, {"text": "It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution\u2014that is, taking \u22121 or 1 for values, with probability \u00bd each."}, {"text": "It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function, by using the Dirac delta function. (This is not possible with a probability density function in the sense defined above, it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution\u2014that is, taking \u22121 or 1 for values, with probability \u00bd each."}, {"text": "In density estimation, the unknown parameter is probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for L2 norm,"}, {"text": "In density estimation, the unknown parameter is probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. 
For example, for L2 norm,"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}]}, {"question": "How do you train a neural network in TensorFlow", "positive_ctxs": [{"text": "Train a neural network with TensorFlowStep 1: Import the data.Step 2: Transform the data.Step 3: Construct the tensor.Step 4: Build the model.Step 5: Train and evaluate the model.Step 6: Improve the model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}, {"text": "A fundamental objection is that ANNs do not sufficiently reflect neuronal function. Backpropagation is a critical step, although no such mechanism exists in biological neural networks. How information is coded by real neurons is not known."}, {"text": "As an example, consider the process of boarding a train, in which the reward is measured by the negative of the total time spent boarding (alternatively, the cost of boarding the train is equal to the boarding time). One strategy is to enter the train door as soon as they open, minimizing the initial wait time for yourself. If the train is crowded, however, then you will have a slow entry after the initial action of entering the door as people are fighting you to depart the train as you attempt to board."}, {"text": "For many applications, the training data is less available. 
Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}, {"text": "For many applications, the training data is less available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain."}]}, {"question": "What is the meaning of covariance in statistics", "positive_ctxs": [{"text": "Covariance: An Overview. Variance and covariance are mathematical terms frequently used in statistics and probability theory. Variance refers to the spread of a data set around its mean value, while a covariance refers to the measure of the directional relationship between two random variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:"}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}]}, {"question": "How do you find the random variable", "positive_ctxs": [{"text": "The Random Variable is X = \"The sum of the scores on the two dice\". Let's count how often each value occurs, and work out the probabilities: 2 occurs just once, so P(X = 2) = 1/36. 
3 occurs twice, so P(X = 3) = 2/36 = 1/18."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. 
Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}]}, {"question": "What is NMF machine learning", "positive_ctxs": [{"text": "NMF stands for non-negative matrix factorization, a technique for obtaining low rank representation of matrices with non-negative or positive elements. In information retrieval and text mining, we rely on term-document matrices for representing document collections."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Depending on how the NMF components are obtained, the imputation procedure with NMF can be composed of two steps. On one hand, when the NMF components are known, Ren et al. (2020) proved that impact from missing data during data imputation (\"target modeling\" in their study) is a second order effect."}, {"text": "Depending on how the NMF components are obtained, the imputation procedure with NMF can be composed of two steps. On one hand, when the NMF components are known, Ren et al. (2020) proved that impact from missing data during data imputation (\"target modeling\" in their study) is a second order effect."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "How do you determine data error", "positive_ctxs": [{"text": "Error -- subtract the theoretical value (usually the number the professor has as the target value) from your experimental data point. Percent error -- take the absolute value of the error divided by the theoretical value, then multiply by 100."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? 
How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is a nonparametric test what is a parametric test", "positive_ctxs": [{"text": "In the literal meaning of the terms, a parametric statistical test is one that makes assumptions about the parameters (defining properties) of the population distribution(s) from which one's data are drawn, while a non-parametric test is one that makes no such assumptions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric test and a nonparametric test of the same hypothesis."}, {"text": "A Wilcoxon signed-rank test is a nonparametric test that can be used to determine whether two dependent samples were selected from populations having the same distribution."}, {"text": "So when the result of a statistical analysis is said to be an \u201cexact test\u201d or an \u201cexact p-value\u201d, it ought to imply that the test is defined without 
parametric assumptions and evaluated without using approximate algorithms. In principle however it could also mean that a parametric test has been employed in a situation where all parametric assumptions are fully met, but it is in most cases impossible to prove this completely in a real world situation. Exceptions when it is certain that parametric tests are exact include tests based on the binomial or Poisson distributions."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. 
For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}, {"text": "Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation \u03c72 test of association, a permutation version of Aly's test for comparing variances and so on."}]}, {"question": "What is structural topic modeling", "positive_ctxs": [{"text": "The Structural Topic Model allows researchers to flexibly estimate a topic model that includes document-level metadata. 
The stm package provides many useful features, including rich ways to explore topics, estimate uncertainty, and visualize quantities of interest."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is the topic modeling which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. In the topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed."}, {"text": "Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is the topic modeling which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. In the topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed."}, {"text": "Latent variable models are statistical models where in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is the topic modeling which is a statistical model for generating the words (observed variables) in the document based on the topic (latent variable) of the document. 
In the topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed."}, {"text": "jLDADMM A Java package for topic modeling on normal or short texts. jLDADMM includes implementations of the LDA topic model and the one-topic-per-document Dirichlet Multinomial Mixture model. jLDADMM also provides an implementation for document clustering evaluation to compare topic models."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}]}, {"question": "How do you interpret Poisson regression results", "positive_ctxs": [{"text": "We can interpret the Poisson regression coefficient as follows: for a one unit change in the predictor variable, the difference in the logs of expected counts is expected to change by the respective regression coefficient, given the other predictor variables in the model are held constant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables."}, {"text": "Negative binomial regression is a popular generalization of Poisson regression because it loosens the highly restrictive assumption that the variance is equal to the mean made by the Poisson model. The traditional negative binomial regression model, commonly known as NB2, is based on the Poisson-gamma mixture distribution. This model is popular because it models the Poisson heterogeneity with a gamma distribution."}, {"text": "Another example of generalized linear models includes Poisson regression which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link."}, {"text": "Another example of generalized linear models includes Poisson regression which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link."}, {"text": "Then, in accordance with utility theory, we can then interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. 
explanatory variable) has in contributing to the utility \u2014 or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice."}]}, {"question": "What is the difference between Anova and Manova", "positive_ctxs": [{"text": "The obvious difference between ANOVA and a \"Multivariate Analysis of Variance\" (MANOVA) is the \u201cM\u201d, which stands for multivariate. In basic terms, A MANOVA is an ANOVA with two or more continuous response variables. Like ANOVA, MANOVA has both a one-way flavor and a two-way flavor."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. 
Establishing and correcting for bias is necessary for calibration."}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}, {"text": "There is a simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the difference between the proportion of pairs favorable to the hypothesis (f) minus its complement (i.e. : the proportion that is unfavorable (u)). This simple difference formula is just the difference of the common language effect size of each group, and is as follows:"}]}, {"question": "How are algorithms biased", "positive_ctxs": [{"text": "Humans are error-prone and biased, but that doesn't mean that algorithms are necessarily better. But these systems can be biased based on who builds them, how they're developed, and how they're ultimately used. This is commonly known as algorithmic bias."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much the premises support the conclusion depends upon (1) the number in the sample group, (2) the number in the population, and (3) the degree to which the sample represents the population (which may be achieved by taking a random sample). The hasty generalization and the biased sample are generalization fallacies."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Additionally, this evaluation is biased towards algorithms that use the same cluster model. 
For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering."}, {"text": "Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering."}, {"text": "How high is the probability they really are drunk?Many would answer as high as 95%, but the correct probability is about 2%."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. 
A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}]}, {"question": "What is supervised and unsupervised data", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. 
It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "When data are unlabelled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data to groups, and then map new data to these formed groups. The support-vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications."}, {"text": "When data are unlabelled, supervised learning is not possible, and an unsupervised learning approach is required, which attempts to find natural clustering of the data to groups, and then map new data to these formed groups. 
The support-vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data, and is one of the most widely used clustering algorithms in industrial applications."}]}, {"question": "How can statistics be mislead", "positive_ctxs": [{"text": "Bad Sampling. The data can be misleading due to the sampling method used to obtain data. For instance, the size and the type of sample used in any statistics play a significant role \u2014 many polls and questionnaires target certain audiences that provide specific answers, resulting in small and biased sample sizes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "How AI will change the future", "positive_ctxs": [{"text": "In the future, artificial intelligence (AI) is likely to substantially change both marketing strategies and customer behaviors. 
Finally, the authors suggest AI will be more effective if it augments (rather than replaces) human managers. AI is going to make our lives better in the future."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending in the exact numbers, the average causal effect might be an increase in blood pressure."}, {"text": "The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, agree in principle that \"There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities\" and \"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.\" AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of \"those inane Terminator pictures\" to illustrate AI safety concerns: \"It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work ... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. 
In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "For financial statements audit, AI makes continuous audit possible. AI tools could analyze many sets of different information immediately. The potential benefit would be the overall audit risk will be reduced, the level of assurance will be increased and the time duration of audit will be reduced."}, {"text": "For financial statements audit, AI makes continuous audit possible. AI tools could analyze many sets of different information immediately. The potential benefit would be the overall audit risk will be reduced, the level of assurance will be increased and the time duration of audit will be reduced."}]}, {"question": "What are the steps to carry out analysis of variance", "positive_ctxs": [{"text": "We will run the ANOVA using the five-step approach.Set up hypotheses and determine level of significance. H0: \u03bc1 = \u03bc2 = \u03bc3 = \u03bc4 H1: Means are not all equal \u03b1=0.05.Select the appropriate test statistic. Set up decision rule. Compute the test statistic. Conclusion."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The various attempts to carry this out met with failure, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by G\u00f6del's incompleteness theorems."}, {"text": "Since the rise of online communication, scholars have discussed how to adapt textual analysis techniques to study web-based content. 
The nature of online sources necessitates particular care in many of the steps of a content analysis compared to offline sources."}, {"text": "A data analytics approach can be used in order to predict energy consumption in buildings. The different steps of the data analysis process are carried out in order to realise smart buildings, where the building management and control operations including heating, ventilation, air conditioning, lighting and security are realised automatically by miming the needs of the building users and optimising resources like energy and time."}, {"text": "It can be shown that this estimator is consistent (as n\u2192\u221e and T fixed), asymptotically normal and efficient. Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts"}, {"text": "It can be shown that this estimator is consistent (as n\u2192\u221e and T fixed), asymptotically normal and efficient. Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts"}, {"text": "Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. It involves computers learning from data provided so that they carry out certain tasks. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed."}, {"text": "Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so. It involves computers learning from data provided so that they carry out certain tasks. 
For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed."}]}, {"question": "What are the differences between supervised and unsupervised learning", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, though unsupervised learning encompasses many other domains involving summarizing and explaining data features. 
It could be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution"}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}, {"text": "Deep learning is being successfully applied to financial fraud detection and anti-money laundering. \"Deep anti-money laundering detection system can spot and recognize relationships and similarities between data and, further down the road, learn to detect anomalies or classify and predict specific events\". The solution leverages both supervised learning techniques, such as the classification of suspicious transactions, and unsupervised learning, e.g."}]}, {"question": "How do heuristics affect decision making", "positive_ctxs": [{"text": "A heuristic is a mental shortcut that allows people to solve problems and make judgments quickly and efficiently. 
These rule-of-thumb strategies shorten decision-making time and allow people to function without constantly stopping to think about their next course of action."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Many studies have been done to further look into affect heuristics and many have found that these heuristics shape our attitudes and opinions towards our decisions, especially risk perception. These studies demonstrate how affect is an important characteristic of the decision-making process in many different domains and aspects as well as how it can lead to a strong conditioner of preference. As demonstrated below, affect is independent of cognition which indicate that there are conditions where affect does not require cognition."}, {"text": "The multiple sequence model defines different contingency variables such as group composition, task structure, and conflict management approaches, which all affect group decision making. This model consists of 36 clusters for coding group communication and four cluster-sets, such as proposal growth, conflict, socio-emotional interests, and expressions of uncertainty. By coding group decision making processes, Poole identified a set of decision paths that are usually used by groups during decision making processes.This theory also consists of various tracks that define different stages of interpersonal communication, problem solving, and decision making that occur in group communication."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "While heuristics can be helpful in many situations, it can also lead to biases which can result in poor decision-making habits. 
Like other heuristics, the affect heuristic can provide efficient and adaptive responses, but relying on affect can also cause decisions to be misleading."}, {"text": "Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Usually, heuristics do not guarantee that any optimal solution need be found. On the other hand, heuristics are used to find approximate solutions for many complicated optimization problems."}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}]}, {"question": "Is the distribution unimodal or multimodal", "positive_ctxs": [{"text": "A distribution with a single mode is said to be unimodal. A distribution with more than one mode is said to be bimodal, trimodal, etc., or in general, multimodal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "The value of b for the uniform distribution is 5/9. This is also its value for the exponential distribution. 
Values greater than 5/9 may indicate a bimodal or multimodal distribution, though corresponding values can also result for heavily skewed unimodal distributions."}, {"text": "In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term \"mode\" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics."}, {"text": "Histogram shape-based methods in particular, but also many other thresholding algorithms, make certain assumptions about the image intensity probability distribution. The most common thresholding methods work on bimodal distributions, but algorithms have also been developed for unimodal distributions, multimodal distributions, and circular distributions."}, {"text": "The bounds on this inequality can also be sharpened if the distribution is both unimodal and symmetrical. An empirical distribution can be tested for symmetry with a number of tests including McWilliam's R*. It is known that the variance of a unimodal symmetrical distribution with finite support [a, b] is less than or equal to ( b \u2212 a )2 / 12. Let the distribution be supported on the finite interval [ \u2212N, N ] and the variance be finite."}, {"text": "Symmetry of the distribution decreases the inequality's bounds by a factor of 2 while unimodality sharpens the bounds by a factor of 4/9. Because the mean and the mode in a unimodal distribution differ by at most \u221a3 standard deviations, at most 5% of a symmetrical unimodal distribution lies outside (2\u221a10 + 3\u221a3)/3 standard deviations of the mean (approximately 3.840 standard deviations).
This is sharper than the bounds provided by the Chebyshev inequality (approximately 4.472 standard deviations)."}]}, {"question": "What are the uses of least square method", "positive_ctxs": [{"text": "The least squares method is a statistical procedure to find the best fit for a set of data points by minimizing the sum of the offsets or residuals of points from the plotted curve. Least squares regression is used to predict the behavior of dependent variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as an algebraic procedure for fitting linear equations to data and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the earth. The value of Legendre's method of least squares was immediately recognized by leading astronomers and geodesists of the time. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies."}, {"text": "The first clear and concise exposition of the method of least squares was published by Legendre in 1805. 
The technique is described as an algebraic procedure for fitting linear equations to data and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the earth. The value of Legendre's method of least squares was immediately recognized by leading astronomers and geodesists of the time. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies."}, {"text": "The last property is what differentiates the reals from the rationals (and from other more exotic ordered fields). For example, the set of rationals with square less than 2 has rational upper bounds (e.g., 1.42), but no rational least upper bound, because the square root of 2 is not rational."}, {"text": "IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set. For example, by minimizing the least absolute errors rather than the least square errors."}, {"text": "In that work he claimed to have been in possession of the method of least squares since 1795. This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and to the normal distribution."}]}, {"question": "What is planning problem in AI", "positive_ctxs": [{"text": "The planning problem in Artificial Intelligence is about the decision making performed by intelligent creatures like robots, humans, or computer programs when trying to achieve some goal. In the following we discuss a number of ways of formalizing planning, and show how the planning problem can be solved automatically."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A solution for a conformant planning problem is a sequence of actions. 
Haslum and Jonsson have demonstrated that the problem of conformant planning is EXPSPACE-complete, and 2EXPTIME-complete when the initial situation is uncertain, and there is non-determinism in the actions outcomes."}, {"text": "A solution for a conformant planning problem is a sequence of actions. Haslum and Jonsson have demonstrated that the problem of conformant planning is EXPSPACE-complete, and 2EXPTIME-complete when the initial situation is uncertain, and there is non-determinism in the actions outcomes."}, {"text": "Michael L. Littman showed in 1998 that with branching actions, the planning problem becomes EXPTIME-complete. A particular case of contiguous planning is represented by FOND problems - for \"fully-observable and non-deterministic\". If the goal is specified in LTLf (linear time logic on finite trace) then the problem is always EXPTIME-complete and 2EXPTIME-complete if the goal is specified with LDLf."}, {"text": "Michael L. Littman showed in 1998 that with branching actions, the planning problem becomes EXPTIME-complete. A particular case of contiguous planning is represented by FOND problems - for \"fully-observable and non-deterministic\". If the goal is specified in LTLf (linear time logic on finite trace) then the problem is always EXPTIME-complete and 2EXPTIME-complete if the goal is specified with LDLf."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Deterministic planning was introduced with the STRIPS planning system, which is a hierarchical planner. Action names are ordered in a sequence and this is a plan for the robot. Hierarchical planning can be compared with an automatic generated behavior tree."}, {"text": "Deterministic planning was introduced with the STRIPS planning system, which is a hierarchical planner. Action names are ordered in a sequence and this is a plan for the robot. 
Hierarchical planning can be compared with an automatic generated behavior tree."}]}, {"question": "What is the metric used by ordinary least squares OLS to determine the best fit line", "positive_ctxs": [{"text": "In order to fit the best intercept line between the points in the above scatter plots, we use a metric called \u201cSum of Squared Errors\u201d (SSE) and compare the lines to find out the best fit by reducing errors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is one of the assumptions under which the Gauss\u2013Markov theorem applies and ordinary least squares (OLS) gives the best linear unbiased estimator (\"BLUE\"). Homoscedasticity is not required for the coefficient estimates to be unbiased, consistent, and asymptotically normal, but it is required for OLS to be efficient. It is also required for the standard errors of the estimates to be unbiased and consistent, so it is required for accurate hypothesis testing, e.g."}, {"text": "This is one of the assumptions under which the Gauss\u2013Markov theorem applies and ordinary least squares (OLS) gives the best linear unbiased estimator (\"BLUE\"). Homoscedasticity is not required for the coefficient estimates to be unbiased, consistent, and asymptotically normal, but it is required for OLS to be efficient. It is also required for the standard errors of the estimates to be unbiased and consistent, so it is required for accurate hypothesis testing, e.g."}, {"text": "It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. 
Other regression methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil\u2013Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit."}, {"text": "It is common to make the additional stipulation that the ordinary least squares (OLS) method should be used: the accuracy of each predicted value is measured by its squared residual (vertical distance between the point of the data set and the fitted line), and the goal is to make the sum of these squared deviations as small as possible. Other regression methods that can be used in place of ordinary least squares include least absolute deviations (minimizing the sum of absolute values of residuals) and the Theil\u2013Sen estimator (which chooses a line whose slope is the median of the slopes determined by pairs of sample points). Deming regression (total least squares) also finds a line that fits a set of two-dimensional sample points, but (unlike ordinary least squares, least absolute deviations, and median slope regression) it is not really an instance of simple linear regression, because it does not separate the coordinates into one dependent and one independent variable and could potentially return a vertical line as its fit."}, {"text": "is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. 
Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by"}, {"text": "is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by"}, {"text": "In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being observed) in the given dataset and those predicted by the linear function of the independent variable."}]}, {"question": "What is Homoscedasticity in statistics", "positive_ctxs": [{"text": "In statistics, a sequence (or a vector) of random variables is homoscedastic /\u02ccho\u028amo\u028ask\u0259\u02c8d\u00e6st\u026ak/ if all its random variables have the same finite variance. This is also known as homogeneity of variance. The complementary notion is called heteroscedasticity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) 
What happens if negative numbers are entered?"}]}, {"question": "Which algorithm is best for sentiment analysis", "positive_ctxs": [{"text": "Overall, Sentiment analysis may involve the following types of classification algorithms: Linear Regression, Naive Bayes, Support Vector Machines, RNN derivatives LSTM and GRU."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text. There are various other types of sentiment analysis like- Aspect Based sentiment analysis, Grading sentiment analysis (positive, negative, neutral), Multilingual sentiment analysis and detection of emotions."}, {"text": "In general, the utility for practical commercial tasks of sentiment analysis as it is defined in academic research has been called into question, mostly since the simple one-dimensional model of sentiment from negative to positive yields rather little actionable information for a client worrying about the effect of public discourse on e.g. brand or corporate reputation. To better fit market needs, evaluation of sentiment analysis has moved to more task-based measures, formulated together with representatives from PR agencies and market research professionals. 
The RepLab evaluation data set is less on the content of the text under consideration and more on the effect of the text in question on brand reputation. Because evaluation of sentiment analysis is becoming more and more task based, each implementation needs a separate training model to get a more accurate representation of sentiment for a given data set."}, {"text": "Clearly, the high evaluated item should be recommended to the user. Based on these two motivations, a combination ranking score of similarity and sentiment rating can be constructed for each candidate item. Except for the difficulty of the sentiment analysis itself, applying sentiment analysis on reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review."}, {"text": "Newton's method requires the 2nd order derivatives, so for each iteration, the number of function calls is in the order of N\u00b2, but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself."}, {"text": "The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social networks discussions. The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances, and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment. The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right."}, {"text": "Also, the problem of sentiment analysis is non-monotonic in respect to sentence extension and stop-word substitution (compare THEY would not let my dog stay in this hotel vs I would not let my dog stay in this hotel). 
To address this issue a number of rule-based and reasoning-based approaches have been applied to sentiment analysis, including defeasible logic programming. Also, there is a number of tree traversal rules applied to syntactic parse tree to extract the topicality of sentiment in open domain setting."}, {"text": "A human analysis component is required in sentiment analysis, as automated systems are not able to analyze historical tendencies of the individual commenter, or the platform and are often classified incorrectly in their expressed sentiment. Automation impacts approximately 23% of comments that are correctly classified by humans. However, humans often disagree, and it is argued that the inter-human agreement provides an upper bound that automated sentiment classifiers can eventually reach.Sometimes, the structure of sentiments and topics is fairly complex."}]}, {"question": "Why are measures of dispersion used in addition to measures of central tendency", "positive_ctxs": [{"text": "While measures of central tendency are used to estimate \"normal\" values of a dataset, measures of dispersion are important for describing the spread of the data, or its variation around a central value. A proper description of a set of data should include both of these characteristics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness."}, {"text": "The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. 
Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary."}, {"text": "The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary."}, {"text": "The term central tendency dates from the late 1920s.The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote \"the tendency of quantitative data to cluster around some central value."}, {"text": "The term central tendency dates from the late 1920s.The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote \"the tendency of quantitative data to cluster around some central value."}, {"text": "The term central tendency dates from the late 1920s.The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. 
Occasionally authors use central tendency to denote \"the tendency of quantitative data to cluster around some central value."}, {"text": "In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution. It may also be called a center or location of the distribution. Colloquially, measures of central tendency are often called averages."}]}, {"question": "What is likelihood function in logistic regression", "positive_ctxs": [{"text": "Logistic regression is a model for binary classification predictive modeling. Under this framework, a probability distribution for the target variable (class label) must be assumed and then a likelihood function defined that calculates the probability of observing the outcome given the input data and the model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e."}, {"text": "Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e."}, {"text": "Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. 
A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. There is no conjugate prior of the likelihood function in logistic regression. When Bayesian inference was performed analytically, this made the posterior distribution difficult to calculate except in very low dimensions."}, {"text": "In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. There is no conjugate prior of the likelihood function in logistic regression. When Bayesian inference was performed analytically, this made the posterior distribution difficult to calculate except in very low dimensions."}]}, {"question": "How does feedforward neural network work", "positive_ctxs": [{"text": "The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction\u2014forward\u2014from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "The feedforward neural network was the first and simplest type of artificial neural network devised. 
In this network, the information moves in only one direction\u2014forward\u2014from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."}, {"text": "A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs."}, {"text": "A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs."}, {"text": "A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs."}, {"text": "A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs."}]}, {"question": "How do you reduce the margin of error in statistics", "positive_ctxs": [{"text": "Increase the sample size. Often, the most practical way to decrease the margin of error is to increase the sample size. Reduce variability. The less that your data varies, the more precisely you can estimate a population parameter. Use a one-sided confidence interval. 
Lower the confidence level."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample, however if a pollster wishes to reduce the margin of error to 1% they would need a sample of around 10,000 people. In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error and a sample size of around 500\u20131,000 is a typical compromise for political polls."}, {"text": "The margin of error is a statistic expressing the amount of random sampling error in the results of a survey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a survey of the entire population. The margin of error will be positive whenever a population is incompletely sampled and the outcome measure has positive variance, which is to say, the measure varies."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%."}, {"text": "the combined effect of that and precision.A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. 
For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units)."}, {"text": "the combined effect of that and precision.A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Where not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 843 m would imply a margin of error of 0.5 m (the last significant digits are the units)."}, {"text": "The comments should encourage the student to think about the effects of his or her actions on others\u2014-a strategy that in effect encourages the student to consider the ethical implications of the actions (Gibbs, 2003). Instead of simply saying, \"When you cut in line ahead of the other kids, that was not fair to them\", the teacher can try asking, \"How do you think the other kids feel when you cut in line ahead of them?\""}]}, {"question": "What is Bayesian network in machine learning", "positive_ctxs": [{"text": "A Bayesian network is a compact, flexible and interpretable representation of a joint probability distribution. It is also an useful tool in knowledge discovery as directed acyclic graphs allow representing causal relations between variables. Typically, a Bayesian network is learned from data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. 
The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:"}, {"text": "Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG:"}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF) and that approximate inference within a factor 2n1\u2212\u025b for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard.In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. 
This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF) and that approximate inference within a factor 2n1\u2212\u025b for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard.In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. 
Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}]}, {"question": "What is the T score in statistics", "positive_ctxs": [{"text": "A t score is one form of a standardized test statistic (the other you'll come across in elementary statistics is the z-score). The t score formula enables you to take an individual score and transform it into a standardized form>one which helps you to compare scores."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}]}, {"question": "What is the cost function used in logistic regression", "positive_ctxs": [{"text": "We can call a Logistic Regression a Linear Regression model but the Logistic Regression uses a more complex cost function, this cost function can be defined as the 'Sigmoid function' or also known as the 'logistic function' instead of a linear function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. 
This function has a continuous derivative, which allows it to be used in backpropagation."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}, {"text": "Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e."}, {"text": "Although the dependent variable in logistic regression is Bernoulli, the logit is on an unrestricted scale. The logit function is the link function in this kind of generalized linear model, i.e."}]}, {"question": "Why is Bayesian inference", "positive_ctxs": [{"text": "Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. 
Bayesian updating is particularly important in the dynamic analysis of a sequence of data."}, {"text": "Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation."}, {"text": "Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation."}, {"text": "Bayesian inference has been applied in different Bioinformatics applications, including differential gene expression analysis. Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), where serial measurements are incorporated to update a Bayesian model which is primarily built from prior knowledge."}, {"text": "A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. 
Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals."}, {"text": "A decision-theoretic justification of the use of Bayesian inference (and hence of Bayesian probabilities) was given by Abraham Wald, who proved that every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures. Conversely, every Bayesian procedure is admissible."}, {"text": "Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs."}]}, {"question": "How do you know if two samples are significantly different", "positive_ctxs": [{"text": "3.2 How to test for differences between samples: Decide on a hypothesis to test, often called the \u201cnull hypothesis\u201d (H0). In our case, the hypothesis is that there is no difference between sets of samples. Decide on a statistic to test the truth of the null hypothesis. Calculate the statistic. Compare it to a reference value to establish significance, the P-value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}, {"text": "Note that, since any rotation of a solution is also a solution, this makes interpreting the factors difficult. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument."}, {"text": "Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. 
That's because the only two remaining outcomes for this random experiment are:"}]}, {"question": "Is sample mean equal to population mean", "positive_ctxs": [{"text": "Mean, variance, and standard deviation The mean of the sampling distribution of the sample mean will always be the same as the mean of the original non-normal distribution. In other words, the sample mean is equal to the population mean. where \u03c3 is population standard deviation and n is sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For a random sample of N observations on the jth random variable, the sample mean's distribution itself has mean equal to the population mean"}, {"text": "For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual\u2014divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples."}, {"text": "For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual\u2014divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples."}, {"text": "For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual\u2014divided by the total number of individuals. 
The sample mean may differ from the population mean, especially for small samples."}, {"text": "For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual\u2014divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples."}, {"text": "For a finite population, the population mean of a property is equal to the arithmetic mean of the given property, while considering every member of the population. For example, the population mean height is equal to the sum of the heights of every individual\u2014divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples."}, {"text": "Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables."}]}, {"question": "What is sampling distribution of mean with replacement", "positive_ctxs": [{"text": "In sampling with replacement the mean of all sample means equals the mean of the population: Whatever the shape of the population distribution, the distribution of sample means is approximately normal with better approximations as the sample size, n, increases."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. 
If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "Sampling done without replacement is no longer independent, but still satisfies exchangeability, hence many results still hold. Further, for a small sample from a large population, sampling without replacement is approximately the same as sampling with replacement, since the probability of choosing the same individual twice is low."}, {"text": "Sampling done without replacement is no longer independent, but still satisfies exchangeability, hence many results still hold. 
Further, for a small sample from a large population, sampling without replacement is approximately the same as sampling with replacement, since the probability of choosing the same individual twice is low."}, {"text": "Sampling done without replacement is no longer independent, but still satisfies exchangeability, hence many results still hold. Further, for a small sample from a large population, sampling without replacement is approximately the same as sampling with replacement, since the probability of choosing the same individual twice is low."}, {"text": "Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle, as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample."}]}, {"question": "What are statistical models used for", "positive_ctxs": [{"text": "A statistical model is a mathematical representation (or mathematical model) of observed data. When data analysts apply various statistical models to the data they are investigating, they are able to understand and interpret the information more strategically."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "This leads naturally to the hidden Markov model (HMM), one of the most common statistical models used for sequence labeling. Other common models in use are the maximum entropy Markov model and conditional random field."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. Modeling helps to analyze experimental data and address questions such as: How are the spikes of a neuron related to sensory stimulation or motor activity such as arm movements? What is the neural code used by the nervous system?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What is the underlying framework used to represent knowledge? Semantic networks were one of the first knowledge representation primitives. Also, data structures and algorithms for general fast search."}]}, {"question": "How do you know whether to use parametric or nonparametric", "positive_ctxs": [{"text": "If the mean more accurately represents the center of the distribution of your data, and your sample size is large enough, use a parametric test. If the median more accurately represents the center of the distribution of your data, use a nonparametric test even if you have a large sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "It made no difference whether prices were voluntarily or involuntarily posted below the market clearing price. Scarcity resulted in either case. Price controls fail to achieve their proximate aim, which is to reduce prices paid by retail consumers, but such controls do manage to reduce supply.Nobel Memorial Prize winner Milton Friedman said, \"We economists don't know much, but we do know how to create a shortage."}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors i.e."}, {"text": "How do neurons migrate to the proper position in the central and peripheral systems? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons."}, {"text": "Economist Paul Krugman agrees mostly with the Rawlsian approach in that he would like to \"create the society each of us would want if we didn\u2019t know in advance who we\u2019d be\". 
Krugman elaborated: \"If you admit that life is unfair, and that there's only so much you can do about that at the starting line, then you can try to ameliorate the consequences of that unfairness\"."}]}, {"question": "What is a one sided vs a two sided hypothesis test", "positive_ctxs": [{"text": "The Basics of a One-Tailed Test Hypothesis testing is run to determine whether a claim is true or not, given a population parameter. A test that is conducted to show whether the mean of the sample is significantly greater than and significantly less than the mean of a population is considered a two-tailed test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds."}, {"text": "In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds."}, {"text": "In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. 
Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds."}, {"text": "In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds."}, {"text": "In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds."}, {"text": "A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty or evidence vs a threshold (\"beyond a reasonable doubt\"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence."}, {"text": "A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty or evidence vs a threshold (\"beyond a reasonable doubt\"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. 
A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence."}]}, {"question": "How do you find the p value in a normal distribution", "positive_ctxs": [{"text": "The distribution for z is the standard normal distribution; it has a mean of 0 and a standard deviation of 1. For Ha: p \u2260 26, the P-value would be P(z \u2264 -1.83) + P(z \u2265 1.83) = 2 * P(z \u2264 -1.83). Regardless of Ha, z = (p\u0302 - p0) / sqrt(p0 * (1 - p0) / n), where z gives the number of standard deviations p\u0302 is from p0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "The classroom mean score is 96, which is \u22122.47 standard error units from the population mean of 100. Looking up the z-score in a table of the standard normal distribution cumulative probability, we find that the probability of observing a standard normal value below \u22122.47 is approximately 0.5 \u2212 0.4932 = 0.0068. 
This is the one-sided p-value for the null hypothesis that the 55 students are comparable to a simple random sample from the population of all test-takers."}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "How do you find the probability density function of a continuous random variable", "positive_ctxs": [{"text": "A certain continuous random variable has a probability density function (PDF) given by: f(x) = Cx(1-x)^2, where x can be any number in the real interval [0,1]. 
Compute C using the normalization condition on PDFs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. 
In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there are an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample."}, {"text": "Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set."}, {"text": "Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. 
In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set."}, {"text": "If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a \u201cchange of variable\u201d and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator."}, {"text": "If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a \u201cchange of variable\u201d and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator."}]}, {"question": "What is the non parametric equivalent of the linear regression", "positive_ctxs": [{"text": "There is no non-parametric form of any regression. Regression means you are assuming that a particular parameterized model generated your data, and trying to find the parameters. Non-parametric tests are test that make no assumptions about the model that generated your data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.So we define odds of the dependent variable equaling a case (given some linear combination"}, {"text": "of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. 
Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.So we define odds of the dependent variable equaling a case (given some linear combination"}]}, {"question": "How do you handle multicollinearity in regression modeling", "positive_ctxs": [{"text": "How to Deal with MulticollinearityRedesign the study to avoid multicollinearity. Increase sample size. Remove one or more of the highly-correlated independent variables. Define a new variable equal to a linear combination of the highly-correlated variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Leave the model as is, despite multicollinearity. The presence of multicollinearity doesn't affect the efficiency of extrapolating the fitted model to new data provided that the predictor variables follow the same pattern of multicollinearity in the new data as in the data on which the regression model is based."}, {"text": "Multicollinearity refers to a situation in which more than two explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if, for example as in the equation above, the correlation"}, {"text": "PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases (unless it is regularized)."}, {"text": "PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. 
By contrast, standard regression will fail in these cases (unless it is regularized)."}, {"text": "Tikhonov regularization, named for Andrey Tikhonov, is a method of regularization of ill-posed problems. Ridge regression is a special case of Tikhonov regularization in which all parameters are regularized equally. Ridge regression is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters."}, {"text": "Tikhonov regularization, named for Andrey Tikhonov, is a method of regularization of ill-posed problems. Ridge regression is a special case of Tikhonov regularization in which all parameters are regularized equally. Ridge regression is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters."}]}, {"question": "What does an F ratio mean", "positive_ctxs": [{"text": "The F ratio is the ratio of two mean square values. If the null hypothesis is true, you expect F to have a value close to 1.0 most of the time. A large F ratio means that the variation among group means is more than you'd expect to see by chance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. 
What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The odds ratio p11p00 / p01p10 for this distribution does not depend on the value of f. This shows that the odds ratio (and consequently the log odds ratio) is invariant to non-random sampling based on one of the variables being studied. Note however that the standard error of the log odds ratio does depend on the value of f.This fact is exploited in two important situations:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial - their approval does not mean that the therapy is 'safe' or effective, only that the trial may be conducted."}, {"text": "The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Also, changing F into \u2013F does not change the curve, but changes the sign of the numerator if the absolute value is omitted in the preceding formula."}]}, {"question": "What do feature detectors detect", "positive_ctxs": [{"text": "The ability to detect certain types of stimuli, like movements, shape, and angles, requires specialized cells in the brain called feature detectors. 
Without these, it would be difficult, if not impossible, to detect a round object, like a baseball, hurdling toward you at 90 miles per hour."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Encyclopedia of Computer Science and Engineering. (summary and review of a number of feature detectors formulated based on a scale-space operations)"}, {"text": "For example, in the case of an anomaly detection domain the algorithm prepares a set of exemplar pattern detectors trained on normal (non-anomalous) patterns that model and detect unseen or anomalous patterns."}, {"text": "There are many computer vision algorithms that use feature detection as the initial step, so as a result, a very large number of feature detectors have been developed. These vary widely in the kinds of feature detected, the computational complexity and the repeatability."}, {"text": "Blobs provide a complementary description of image structures in terms of regions, as opposed to corners that are more point-like. Nevertheless, blob descriptors may often contain a preferred point (a local maximum of an operator response or a center of gravity) which means that many blob detectors may also be regarded as interest point operators. Blob detectors can detect areas in an image which are too smooth to be detected by a corner detector."}, {"text": "Say we are trying to detect faces. A constellation model would use smaller part detectors, for instance mouth, nose and eye detectors and make a judgment about whether an image has a face based on the relative positions in which the components fire."}, {"text": ", composed of the cluster centers. 
The Kadir Brady detector was chosen because it produces fewer, more salient regions, as opposed to feature detectors like multiscale Harris, which produces numerous, less significant regions."}]}, {"question": "What is NLU and NLP", "positive_ctxs": [{"text": "NLP is short for natural language processing while NLU is the shorthand for natural language understanding. Similarly named, the concepts both deal with the relationship between natural language (as in, what we as humans speak, not what computers understand) and artificial intelligence."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "Which plot is required for cumulative frequency distribution", "positive_ctxs": [{"text": "A curve that represents the cumulative frequency distribution of grouped data on a graph is called a Cumulative Frequency Curve or an Ogive."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the case of cumulative frequency there are only two possibilities: a certain reference value X is exceeded or it is not exceeded. The sum of frequency of exceedance and cumulative frequency is 1 or 100%. Therefore, the binomial distribution can be used in estimating the range of the random error."}, {"text": "The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests."}, {"text": "In statistics, cumulative distribution function (CDF)-based nonparametric confidence intervals are a general class of confidence intervals around statistical functionals of a distribution. To calculate these confidence intervals, all that is required is an"}, {"text": "If successful, the known equation is enough to report the frequency distribution and a table of data will not be required. Further, the equation helps interpolation and extrapolation. However, care should be taken with extrapolating a cumulative frequency distribution, because this may be a source of errors."}, {"text": "When a cumulative frequency distribution is derived from a record of data, it can be questioned if it can be used for predictions. 
For example, given a distribution of river discharges for the years 1950\u20132000, can this distribution be used to predict how often a certain river discharge will be exceeded in the years 2000\u201350?"}, {"text": "A graph of the cumulative probability of failures up to each time point is called the cumulative distribution function, or CDF. In survival analysis, the cumulative distribution function gives the probability that the survival time is less than or equal to a specific time, t."}, {"text": "One of the most popular application of cumulative distribution function is standard normal table, also called the unit normal table or Z table, is the value of cumulative distribution function of the normal distribution. It is very useful to use Z-table not only for probabilities below a value which is the original application of cumulative distribution function, but also above and/or between values on standard normal distribution, and it was further extended to any normal distribution."}]}, {"question": "What is depth in CNN", "positive_ctxs": [{"text": "Depth is the number of filters. Depth column (or fibre) is the set of neurons that are all pointing to the same receptive field. Stride has the objective of producing smaller output volumes spatially. For example, if a stride=2, the filter will shift by the amount of 2 pixels as it convolves around the input volume."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. 
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. 
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}, {"text": "It is common to periodically insert a pooling layer between successive convolutional layers (each one typically followed by a ReLU layer) in a CNN architecture. The pooling operation can be used as another form of translation invariance.The pooling layer operates independently on every depth slice of the input and resizes it spatially. 
The most common form is a pooling layer with filters of size 2\u00d72 applied with a stride of 2 downsamples at every depth slice in the input by 2 along both width and height, discarding 75% of the activations:"}]}, {"question": "What is bias function", "positive_ctxs": [{"text": "In statistics, the bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. When a biased estimator is used, bounds of the bias are calculated."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. 
This is array of structures (AoS)."}]}, {"question": "What is the difference between binomial Poisson and normal distributions", "positive_ctxs": [{"text": "Normal distribution describes continuous data which have a symmetric distribution, with a characteristic 'bell' shape. Binomial distribution describes the distribution of binary data from a finite sample. Poisson distribution describes the distribution of binary data from an infinite sample."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. 
Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}, {"text": "The negative binomial, along with the Poisson and binomial distributions, is a member of the (a,b,0) class of distributions. All three of these distributions are special cases of the Panjer distribution. They are also members of the Natural exponential family."}, {"text": "The negative binomial, along with the Poisson and binomial distributions, is a member of the (a,b,0) class of distributions. All three of these distributions are special cases of the Panjer distribution. They are also members of the Natural exponential family."}, {"text": "The negative binomial distribution also arises as a continuous mixture of Poisson distributions (i.e. a compound probability distribution) where the mixing distribution of the Poisson rate is a gamma distribution. That is, we can view the negative binomial as a Poisson(\u03bb) distribution, where \u03bb is itself a random variable, distributed as a gamma distribution with shape = r and scale \u03b8 = p/(1 \u2212 p) or correspondingly rate \u03b2 = (1 \u2212 p)/p."}]}, {"question": "How do you make a deep learning model from scratch", "positive_ctxs": [{"text": "How To Develop a Machine Learning Model From ScratchDefine adequately our problem (objective, desired outputs\u2026).Gather data.Choose a measure of success.Set an evaluation protocol and the different protocols available.Prepare the data (dealing with missing values, with categorial values\u2026).Spilit correctly the data.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. 
RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of state space."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks."}, {"text": "Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks."}, {"text": "Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks."}, {"text": "Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. 
The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks."}]}, {"question": "Why do we use maximum likelihood estimation", "positive_ctxs": [{"text": "We can use MLE in order to get more robust parameter estimates. Thus, MLE can be defined as a method for estimating population parameters (such as the mean and variance for Normal, rate (lambda) for Poisson, etc.) from sample data such that the probability (likelihood) of obtaining the observed data is maximized."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One of the simplest non-trivial examples of estimation is the estimation of the maximum of a uniform distribution. It is used as a hands-on classroom exercise and to illustrate basic principles of estimation theory. Further, in the case of estimation based on a single sample, it demonstrates philosophical issues and possible misunderstandings in the use of maximum likelihood estimators and likelihood functions."}, {"text": "Empirical likelihood (EL) is an estimation method in statistics. Empirical likelihood estimates require fewer assumptions about the error distribution compared to similar methods like maximum likelihood. The estimation method requires that the data are independent and identically distributed (iid)."}, {"text": "Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:"}, {"text": "Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value. 
However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:"}, {"text": "Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: As the sample size increases to infinity, sequences of maximum likelihood estimators have these properties:"}, {"text": "Point estimation can be done within the AIC paradigm: it is provided by maximum likelihood estimation. Interval estimation can also be done within the AIC paradigm: it is provided by likelihood intervals. Hence, statistical inference generally can be done within the AIC paradigm."}, {"text": "Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components."}]}, {"question": "What is a hidden state in RNN", "positive_ctxs": [{"text": "An RNN has a looping mechanism that acts as a highway to allow information to flow from one step to the next. Passing Hidden State to next time step. This information is the hidden state, which is a representation of previous inputs. 
Let's run through an RNN use case to have a better understanding of how this works."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A hidden semi-Markov model (HSMM) is a statistical model with the same structure as a hidden Markov model except that the unobservable process is semi-Markov rather than Markov. This means that the probability of there being a change in the hidden state depends on the amount of time that has elapsed since entry into the current state. This is in contrast to hidden Markov models where there is a constant probability of changing state given survival in the state up to that time.For instance Sanson & Thomson (2001) modelled daily rainfall using a hidden semi-Markov model."}, {"text": "In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). The parameters of a hidden Markov model are of two types, transition probabilities and emission probabilities (also known as output probabilities). The transition probabilities control the way the hidden state at time t is chosen given the hidden state at time"}, {"text": "At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the observed outputs from the true (\"hidden\") state. 
The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space as opposed to a discrete state space as in the hidden Markov model."}, {"text": "In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a categorical distribution) or continuous (typically from a Gaussian distribution). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a linear dynamical system, with a linear relationship among related variables and where all hidden and observed variables follow a Gaussian distribution."}, {"text": "The hidden state space is assumed to consist of one of N possible values, modelled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the N possible states that a hidden variable at time t can be in, there is a transition probability from this state to each of the N possible states of the hidden variable at time"}, {"text": "The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state."}, {"text": "A hidden Markov model is a Markov chain for which the state is only partially observable. 
In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist."}]}, {"question": "What is the probability of a simple random sample", "positive_ctxs": [{"text": "In simple random sampling, each member of a population has an equal chance of being included in the sample. Also, each combination of members of the population has an equal chance of composing the sample. Those two properties are what defines simple random sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The best way to avoid a biased or unrepresentative sample is to select a random sample, also known as a probability sample. A random sample is defined as a sample where each individual member of the population has a known, non-zero chance of being selected as part of the sample. Several types of random samples are simple random samples, systematic samples, stratified random samples, and cluster random samples."}, {"text": "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. This process and technique is known as simple random sampling, and should not be confused with systematic random sampling."}, {"text": "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). 
Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. This process and technique is known as simple random sampling, and should not be confused with systematic random sampling."}, {"text": "In statistics, a simple random sample is a subset of individuals (a sample) chosen from a larger set (a population). Each individual is chosen randomly and entirely by chance, such that each individual has the same probability of being chosen at any stage during the sampling process, and each subset of k individuals has the same probability of being chosen for the sample as any other subset of k individuals. This process and technique is known as simple random sampling, and should not be confused with systematic random sampling."}, {"text": "Another way of stating things is that with probability 1 \u2212 0.014 = 0.986, a simple random sample of 55 students would have a mean test score within 4 units of the population mean. We could also say that with 98.6% confidence we reject the null hypothesis that the 55 test takers are comparable to a simple random sample from the population of test-takers."}, {"text": "In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample\u2014that is, a sample in which every individual in the population is equally likely to be included. 
The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes)."}, {"text": "In statistics, inferences are made about characteristics of a population by studying a sample of that population's individuals. In order to arrive at a sample that presents an unbiased estimate of the true characteristics of the population, statisticians often seek to study a simple random sample\u2014that is, a sample in which every individual in the population is equally likely to be included. The result of this is that every possible combination of individuals who could be chosen for the sample has an equal chance to be the sample that is selected (that is, the space of simple random samples of a given size from a given population is composed of equally likely outcomes)."}]}, {"question": "Why is it called bootstrapping statistics", "positive_ctxs": [{"text": "jackknifing is calculation with data sets sampled randomly from the original data. Bootstrapping is similar to jackknifing except that the position chosen at random may include multiple copies of the same position, to form data sets of the same size as original, to preserve statistical properties of data sampling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A type of computer simulation called discrete-event simulation represents the operation of a system as a chronological sequence of events. 
A technique called bootstrapping the simulation model is used, which bootstraps initial data points using a pseudorandom number generator to schedule an initial set of pending events, which schedule additional events, and with time, the distribution of event times approaches its steady state\u2014the bootstrapping behavior is overwhelmed by steady-state behavior."}, {"text": "An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence couldn't effectively support content evolution."}, {"text": "In statistics and machine learning, when one wants to infer a random variable with a set of variables, usually a subset is enough, and other variables are useless. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that it cannot drop any variable without losing information, it is called a Markov boundary."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "When the theoretical distribution of a statistic of interest is complicated or unknown. Since the bootstrapping procedure is distribution-independent it provides an indirect method to assess the properties of the distribution underlying the sample and the parameters of interest that are derived from this distribution. When the sample size is insufficient for straightforward statistical inference. 
If the underlying distribution is well-known, bootstrapping provides a way to account for the distortions caused by the specific sample that may not be fully representative of the population. When power calculations have to be performed, and a small pilot sample is available."}, {"text": "and is conventionally called the partition function. (The Pitman\u2013Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)"}, {"text": "and is conventionally called the partition function. (The Pitman\u2013Koopman theorem states that the necessary and sufficient condition for a sampling distribution to admit sufficient statistics of bounded dimension is that it have the general form of a maximum entropy distribution.)"}]}, {"question": "How does gradient boosting work for classification", "positive_ctxs": [{"text": "Gradient boosted regression and classification is an additive training tree classification method where trees are built in series (iteratively) and compared to each other based on a mathematically derived score of splits. The trees are compared based on weighted leaf scores within each tree."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function. Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman, simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean."}, {"text": "The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function. 
Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman, simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean."}, {"text": "The idea of gradient boosting originated in the observation by Leo Breiman that boosting can be interpreted as an optimization algorithm on a suitable cost function. Explicit regression gradient boosting algorithms were subsequently developed by Jerome H. Friedman, simultaneously with the more general functional gradient boosting perspective of Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean."}, {"text": "The latter two papers introduced the view of boosting algorithms as iterative functional gradient descent algorithms. That is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification."}, {"text": "The latter two papers introduced the view of boosting algorithms as iterative functional gradient descent algorithms. That is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification."}, {"text": "The latter two papers introduced the view of boosting algorithms as iterative functional gradient descent algorithms. That is, algorithms that optimize a cost function over function space by iteratively choosing a function (weak hypothesis) that points in the negative gradient direction. 
This functional gradient view of boosting has led to the development of boosting algorithms in many areas of machine learning and statistics beyond regression and classification."}, {"text": "Gradient boosting can be used in the field of learning to rank. The commercial web search engines Yahoo and Yandex use variants of gradient boosting in their machine-learned ranking engines. Gradient boosting is also utilized in High Energy Physics in data analysis."}]}, {"question": "How was AlphaGo trained", "positive_ctxs": [{"text": "AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In May 2016, Google unveiled its own proprietary hardware \"tensor processing units\", which it stated had already been deployed in multiple internal projects at Google, including the AlphaGo match against Lee Sedol.In the Future of Go Summit in May 2017, DeepMind disclosed that the version of AlphaGo used in this Summit was AlphaGo Master, and revealed that it had measured the strength of different versions of the software. AlphaGo Lee, the version used against Lee, could give AlphaGo Fan, the version used in AlphaGo vs. Fan Hui, three stones, and AlphaGo Master was even three stones stronger."}, {"text": "The first three games were won by AlphaGo following resignations by Lee. However, Lee beat AlphaGo in the fourth game, winning by resignation at move 180. AlphaGo then continued to achieve a fourth win, winning the fifth game by resignation.The prize was US$1 million."}, {"text": "After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. 
AlphaZero has in turn been succeeded by a program known as MuZero which learns without being taught the rules."}, {"text": "AlphaGo is a computer program that plays the board game Go. It was developed by DeepMind Technologies which was later acquired by Google. Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master."}, {"text": "The AI engaged in reinforcement learning, playing against itself until it could anticipate its own moves and how those moves would affect the game's outcome. In the first three days AlphaGo Zero played 4.9 million games against itself in quick succession. It appeared to develop the skills required to beat top humans within just a few days, whereas the earlier AlphaGo took months of training to achieve the same level.For comparison, the researchers also trained a version of AlphaGo Zero using human games, AlphaGo Master, and found that it learned more quickly, but actually performed more poorly in the long run."}, {"text": "AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play. To avoid \"disrespectfully\" wasting its opponent's time, the program is specifically programmed to resign if its assessment of win probability falls beneath a certain threshold; for the match against Lee, the resignation threshold was set to 20%."}, {"text": "In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. The lead up and the challenge match with Lee Sedol were documented in a documentary film also titled AlphaGo, directed by Greg Kohs. 
It was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016. At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number one ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association. After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas."}]}, {"question": "What is accuracy in confusion matrix", "positive_ctxs": [{"text": "Classification accuracy is the ratio of correct predictions to total predictions made. classification accuracy = correct predictions / total predictions. 1. classification accuracy = correct predictions / total predictions. It is often presented as a percentage by multiplying the result by 100."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A confusion matrix or \"matching matrix\" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied."}, {"text": "A confusion matrix or \"matching matrix\" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied."}, {"text": "A confusion matrix or \"matching matrix\" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied."}, {"text": "A confusion matrix or \"matching matrix\" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied."}, {"text": "A confusion matrix or \"matching matrix\" is often used as a tool to validate the accuracy of k-NN classification. 
More robust statistical methods such as likelihood-ratio test can also be applied."}, {"text": "certain advantages over the count matrix approach in this setting. The first advantage is improved accuracy when applied to large numbers of observations. The second advantage is that the Spearman's rank correlation coefficient can be"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "Why use a Manova instead of Anova", "positive_ctxs": [{"text": "MANOVA is useful in experimental situations where at least some of the independent variables are manipulated. It has several advantages over ANOVA. First, by measuring several dependent variables in a single experiment, there is a better chance of discovering which factor is truly important."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "\"The art of a right decision: Why decision makers want to know the odds-algorithm.\" Newsletter of the European Mathematical Society, Issue 62, 14\u201320, (2006)"}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "It is an anomaly for a small city to field such a good team. the soccer scores and great soccer team) indirectly described a condition by which the observer inferred a new meaningful pattern\u2014that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere?"}, {"text": "Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. 
That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used."}, {"text": "For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability."}, {"text": "Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods."}, {"text": "Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods."}]}, {"question": "What is multivariate analysis used for", "positive_ctxs": [{"text": "Essentially, multivariate analysis is a tool to find patterns and relationships between several variables simultaneously. It lets us predict the effect a change in one variable will have on other variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. The application of multivariate statistics is multivariate analysis."}, {"text": "Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. 
The application of multivariate statistics is multivariate analysis."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Multivariate Student-t distribution. The Inverse-Wishart distribution is important in Bayesian inference, for example in Bayesian multivariate linear regression. Additionally, Hotelling's T-squared distribution is a multivariate distribution, generalising Student's t-distribution, that is used in multivariate hypothesis testing."}, {"text": "Multivariate Student-t distribution. The Inverse-Wishart distribution is important in Bayesian inference, for example in Bayesian multivariate linear regression. Additionally, Hotelling's T-squared distribution is a multivariate distribution, generalising Student's t-distribution, that is used in multivariate hypothesis testing."}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is the best model for image classification", "positive_ctxs": [{"text": "7 Best Models for Image Classification using Keras: 1 Xception. It translates to \u201cExtreme Inception\u201d. 2 VGG16 and VGG19: This is a keras model with 16 and 19 layer network that has an input size of 224X224. 3 ResNet50. The ResNet architecture is another pre-trained model highly useful in Residual Neural Networks. 4 InceptionV3. 
5 DenseNet. 6 MobileNet. 7 NASNet."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "COBWEB: is an incremental clustering technique that keeps a hierarchical clustering model in the form of a classification tree. For each new point COBWEB descends the tree, updates the nodes along the way and looks for the best node to put the point on (using a category utility function)."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "More specifically, ground truth may refer to a process in which a \"pixel\" on a satellite image is compared to what is there in reality (at the present time) in order to verify the contents of the \"pixel\" on the image (noting that the concept of a \"pixel\" is somewhat ill-defined). In the case of a classified image, it allows supervised classification to help determine the accuracy of the classification performed by the remote sensing software and therefore minimize errors in the classification such as errors of commission and errors of omission."}, {"text": "CNNs are often used in image recognition systems. In 2012 an error rate of 0.23% on the MNIST database was reported. Another paper on using CNN for image classification reported that the learning process was \"surprisingly fast\"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database."}, {"text": "CNNs are often used in image recognition systems. In 2012 an error rate of 0.23% on the MNIST database was reported. 
Another paper on using CNN for image classification reported that the learning process was \"surprisingly fast\"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database."}, {"text": "CNNs are often used in image recognition systems. In 2012 an error rate of 0.23% on the MNIST database was reported. Another paper on using CNN for image classification reported that the learning process was \"surprisingly fast\"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database."}]}, {"question": "How do you prove conditional probability", "positive_ctxs": [{"text": "Definition 1. Suppose that events A and B are defined on the same probability space, and the event B is such that P(B) > 0. The conditional probability of A given that B has occurred is given by P(A|B) = P(A \u2229 B)/P(B)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? 
How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "How high is the probability they really are drunk?Many would answer as high as 95%, but the correct probability is about 2%."}, {"text": "Because of their randomness, you may compute from the sample specific intervals containing the fixed \u03bc with a given probability that you denote confidence."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is message passing algorithm", "positive_ctxs": [{"text": "Message passing algorithm which is an iterative decoding algorithm factorizes the global function of many variables into product of simpler local functions, whose arguments are the subset of variables. In order to visualize this factorization we use factor graph."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Message Passing Interface: It is a cross-platform message passing programming interface for parallel computers. It defines the semantics of library functions to allow users to write portable message passing programs in C, C++ and Fortran."}, {"text": "Message Passing Interface: It is a cross-platform message passing programming interface for parallel computers. It defines the semantics of library functions to allow users to write portable message passing programs in C, C++ and Fortran."}, {"text": "If the graph is a chain or a tree, message passing algorithms yield exact solutions. 
The algorithms used in these cases are analogous to the forward-backward and Viterbi algorithm for the case of HMMs."}, {"text": "Decentralized algorithms are ones where no message passing is allowed (in contrast to distributed algorithms where local message passing takes places), and efficient decentralized algorithms exist that will color a graph if a proper coloring exists. These assume that a vertex is able to sense whether any of its neighbors are using the same color as the vertex i.e., whether a local conflict exists. This is a mild assumption in many applications e.g."}, {"text": "Asynchronous Agents Library \u2013 Microsoft actor library for Visual C++. \"The Agents Library is a C++ template library that promotes an actor-based programming model and in-process message passing for coarse-grained dataflow and pipelining tasks. \""}, {"text": "In the first step, messages are passed inwards: starting at the leaves, each node passes a message along the (unique) edge towards the root node. The tree structure guarantees that it is possible to obtain messages from all other adjoining nodes before passing the message on. This continues until the root has obtained messages from all of its adjoining nodes."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "Is the sum of two normal distributions normal", "positive_ctxs": [{"text": "This means that the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances (i.e., the square of the standard deviation is the sum of the squares of the standard deviations)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the means of the two normal distributions are equal, then the combined distribution is unimodal. Conditions for unimodality of the combined distribution were derived by Eisenberger. Necessary and sufficient conditions for a mixture of normal distributions to be bimodal have been identified by Ray and Lindsay.A mixture of two approximately equal mass normal distributions has a negative kurtosis since the two modes on either side of the center of mass effectively reduces the tails of the distribution."}, {"text": "It is not uncommon to encounter situations where an investigator believes that the data comes from a mixture of two normal distributions. Because of this, this mixture has been studied in some detail.A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation."}, {"text": "Wang's indexThe bimodality index proposed by Wang et al assumes that the distribution is a sum of two normal distributions with equal variances but differing means. It is defined as follows:"}, {"text": "Two normal distributionsA package for R is available for testing for bimodality. This package assumes that the data are distributed as a sum of two normal distributions. 
If this assumption is not correct the results may not be reliable."}, {"text": "The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function."}, {"text": "The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function."}, {"text": "The split normal distribution is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution results from rescaling a section of a single density function."}]}, {"question": "What is post processing in machine learning", "positive_ctxs": [{"text": "Postprocessing procedures usually include various pruning routines, rule quality processing, rule filtering, rule combination, model combination, or even knowledge integration. All these procedures provide a kind of symbolic filter for noisy, imprecise, or non-user-friendly knowledge derived by an inductive algorithm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. 
Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "Zero-shot learning (ZSL) is a problem setup in machine learning, where at test time, a learner observes samples from classes that were not observed during training, and needs to predict the category they belong to. This problem is widely studied in computer vision, natural language processing and machine perception."}, {"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is the difference between population variance and sample variance", "positive_ctxs": [{"text": "Summary: Population variance refers to the value of variance that is calculated from population data, and sample variance is the variance calculated from sample data. As a result both variance and standard deviation derived from sample data are more than those found out from population data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Difference between Z-test and t-test: Z-test is used when sample size is large (n>50), or the population variance is known. t-test is used when sample size is small (n<50) and population variance is unknown."}, {"text": "The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance \u2013 these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. 
However, using values other than n improves the estimator in various ways."}, {"text": "The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance \u2013 these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways."}, {"text": "The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance \u2013 these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. 
This is because as the sample size increases, sample means cluster more closely around the population mean."}, {"text": "Mathematically, the variance of the sampling distribution obtained is equal to the variance of the population divided by the sample size. This is because as the sample size increases, sample means cluster more closely around the population mean."}]}, {"question": "What is filter method in feature selection", "positive_ctxs": [{"text": "Filter methods measure the relevance of features by their correlation with dependent variable while wrapper methods measure the usefulness of a subset of feature by actually training a model on it. Filter methods are much faster compared to wrapper methods as they do not involve training the models."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian Network, there is a unique Markov Blanket for each node."}, {"text": "The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian Network, there is a unique Markov Blanket for each node."}, {"text": "proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features. The aim is to penalise a feature's relevancy by its redundancy in the presence of the other selected features. The relevance of a feature set S for the class c is defined by the average value of all mutual information values between the individual feature fi and the class c as follows:"}, {"text": "proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features. The aim is to penalise a feature's relevancy by its redundancy in the presence of the other selected features. 
The relevance of a feature set S for the class c is defined by the average value of all mutual information values between the individual feature fi and the class c as follows:"}, {"text": "This is a survey of the application of feature selection metaheuristics lately used in the literature. This survey was realized by J. Hammon in her 2013 thesis."}, {"text": "This is a survey of the application of feature selection metaheuristics lately used in the literature. This survey was realized by J. Hammon in her 2013 thesis."}, {"text": "enhanced generalization by reducing overfitting (formally, reduction of variance)The central premise when using a feature selection technique is that the data contains some features that are either redundant or irrelevant, and can thus be removed without incurring much loss of information. Redundant and irrelevant are two distinct notions, since one relevant feature may be redundant in the presence of another relevant feature with which it is strongly correlated.Feature selection techniques should be distinguished from feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features."}]}, {"question": "What does a normal distribution model", "positive_ctxs": [{"text": "Normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. In graph form, normal distribution will appear as a bell curve."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data does not come from a normal distribution). 
This result is used to justify using a normal distribution, or a chi square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Measurement errors in physical experiments are often modeled by a normal distribution. 
This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed, rather using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors."}, {"text": "Any hypothesis which does not specify the population distribution completely. Example: A hypothesis specifying a normal distribution with a specified mean and an unspecified variance.The simple/composite distinction was made by Neyman and Pearson."}]}, {"question": "When is the t test preferred to the Z test", "positive_ctxs": [{"text": "Generally, z-tests are used when we have large sample sizes (n > 30), whereas t-tests are most helpful with a smaller sample size (n < 30). Both methods assume a normal distribution of the data, but the z-tests are most useful when the standard deviation is known."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used."}, {"text": "Welch's t test assumes the least and is therefore the most commonly used test in a two-sample hypothesis test where the mean of a metric is to be optimized. While the mean of the variable to be optimized is the most common choice of estimator, others are regularly used."}, {"text": "Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2."}, {"text": "Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. 
When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2."}, {"text": "Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2."}, {"text": "Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA F-test are equivalent; the relation between ANOVA and t is given by F = t2."}, {"text": "If we wish to test the hypothesis that the population odds ratio equals one, the two-sided p-value is 2P(Z < \u2212|L|/SE), where P denotes a probability, and Z denotes a standard normal random variable."}]}, {"question": "What are the problems of machine learning", "positive_ctxs": [{"text": "Here are 5 common machine learning problems and how you can overcome them.1) Understanding Which Processes Need Automation. 2) Lack of Quality Data. 3) Inadequate Infrastructure. 4) Implementation. 5) Lack of Skilled Resources."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Other typical problems of the back-propagation algorithm are the speed of convergence and the possibility of ending up in a local minimum of the error function. Today there are practical methods that make back-propagation in multi-layer perceptrons the tool of choice for many machine learning tasks."}, {"text": "Other typical problems of the back-propagation algorithm are the speed of convergence and the possibility of ending up in a local minimum of the error function. 
Today there are practical methods that make back-propagation in multi-layer perceptrons the tool of choice for many machine learning tasks."}, {"text": "The only way to collect information about the environment is to interact with it.The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems."}, {"text": "The only way to collect information about the environment is to interact with it.The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "In addition, machine learning has been applied to systems biology problems such as identifying transcription factor binding sites using a technique known as Markov chain optimization. Genetic algorithms, machine learning techniques which are based on the natural process of evolution, have been used to model genetic networks and regulatory structures.Other systems biology applications of machine learning include the task of enzyme function prediction, high throughput microarray data analysis, analysis of genome-wide association studies to better understand markers of disease, protein function prediction."}, {"text": "What is the epistemological status of the laws of logic? 
What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}]}, {"question": "How do you do change detection ENVI", "positive_ctxs": [{"text": "Select a File for Image ChangeFrom the Toolbox, select Change Detection > Image Change Workflow. Select an input file from the File Selection dialog. To apply a mask, select the Input Mask tab in the File Selection panel. Select the Input Files tab again.Enter the path and filename for the Time 2 File. Click Next."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}]}, {"question": "What is the sampling distribution of the means and why is it useful", "positive_ctxs": [{"text": "The sampling distribution of the sample mean is very useful because it can tell us the probability of getting any specific mean from a random sample."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. 
This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases."}, {"text": "As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases."}, {"text": "degrees of freedom is the sampling distribution of the t-value when the samples consist of independent identically distributed observations from a normally distributed population. 
Thus for inference purposes t is a useful \"pivotal quantity\" in the case when the mean and variance"}, {"text": "On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R."}]}, {"question": "When should Fisher's exact test be used", "positive_ctxs": [{"text": "While the chi-squared test relies on an approximation, Fisher's exact test is one of exact tests. Especially when more than 20% of cells have expected frequencies < 5, we need to use Fisher's exact test because applying approximation method is inadequate."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2\u00d72 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction. Another alternative is to use maximum likelihood estimates to calculate a p-value from the exact binomial or multinomial distributions and reject or fail to reject based on the p-value.For stratified categorical data the Cochran\u2013Mantel\u2013Haenszel test must be used instead of Fisher's test."}, {"text": "An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2\u00d72 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction. 
Another alternative is to use maximum likelihood estimates to calculate a p-value from the exact binomial or multinomial distributions and reject or fail to reject based on the p-value.For stratified categorical data the Cochran\u2013Mantel\u2013Haenszel test must be used instead of Fisher's test."}, {"text": "The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test. Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see)."}, {"text": "In contrast to permutation tests, the distributions underlying many popular \"classical\" statistical tests, such as the t-test, F-test, z-test, and \u03c72 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are very large, the Pearson's chi-square test will give accurate results."}, {"text": "In contrast to permutation tests, the distributions underlying many popular \"classical\" statistical tests, such as the t-test, F-test, z-test, and \u03c72 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are very large, the Pearson's chi-square test will give accurate results."}, {"text": "In contrast to permutation tests, the distributions underlying many popular \"classical\" statistical tests, such as the t-test, F-test, z-test, and \u03c72 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. 
When sample sizes are very large, the Pearson's chi-square test will give accurate results."}, {"text": "In contrast to permutation tests, the distributions underlying many popular \"classical\" statistical tests, such as the t-test, F-test, z-test, and \u03c72 test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are very large, the Pearson's chi-square test will give accurate results."}]}, {"question": "Does machine learning use neural networks", "positive_ctxs": [{"text": "Strictly speaking, a neural network (also called an \u201cartificial neural network\u201d) is a type of machine learning model that is usually used in supervised learning. A perceptron is a simplified model of a human neuron that accepts an input and performs a computation on that input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. 
ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. 
ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. 
ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains."}, {"text": "Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.Deep-learning architectures such as deep neural networks, deep belief networks, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. 
ANNs have various differences from biological brains."}]}, {"question": "What words improve your confidence levels", "positive_ctxs": [{"text": "Here are 25 phases that you can use to increase confidence and self-esteem in your children.\u201cYou are capable.\" \u201cThat was brave.\" \u201cYou've got this.\" \u201cI believe in you.\" \u201cYou can do hard things.\" \u201cNo matter what happens, I love you.\" \u201cLet's try it together.\" \u201cHow'd you do that?\"More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Now, assume (for example) that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?"}, {"text": "An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out contradictory information to their personal position to form an argument. Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position."}, {"text": "An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out contradictory information to their personal position to form an argument. Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position."}, {"text": "Sometimes researchers talk about the confidence level \u03b3 = (1 \u2212 \u03b1) instead. This is the probability of not rejecting the null hypothesis given that it is true. Confidence levels and confidence intervals were introduced by Neyman in 1937."}, {"text": "Sometimes researchers talk about the confidence level \u03b3 = (1 \u2212 \u03b1) instead. 
This is the probability of not rejecting the null hypothesis given that it is true. Confidence levels and confidence intervals were introduced by Neyman in 1937."}, {"text": "Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level."}, {"text": "People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions. Heightened confidence levels decrease preference for information that supports individuals' personal beliefs."}]}, {"question": "How do anchor boxes in object detection really work", "positive_ctxs": [{"text": "An object detector that uses anchor boxes can process an entire image at once, making real-time object detection systems possible. Because a convolutional neural network (CNN) can process an input image in a convolutional manner, a spatial location in the input can be related to a spatial location in the output."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Shopping centres with anchor stores have consistently outperformed those without one, as the anchor helps draw shoppers initially attracted to the anchor to shop at other shops in the mall. Thus, a mall which loses its last anchor is often considered to be a dead mall."}, {"text": "Viana, F.A.C., Simpson, T.W., Balabanov, V. and Toropov, V. \"Metamodeling in multidisciplinary design optimization: How far have we really come?\" AIAA Journal 52 (4) 670-690, 2014 (DOI: 10.2514/1.J052375)"}, {"text": "How high is the probability they really are drunk?Many would answer as high as 95%, but the correct probability is about 2%."}, {"text": "The software was \"robust enough to make identifications from less-than-perfect face views. 
It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses\u2014even sunglasses\".Real-time face detection in video footage became possible in 2001 with the Viola\u2013Jones object detection framework for faces. Paul Viola and Michael Jones combined their face detection method with the Haar-like feature approach to object recognition in digital images to launch AdaBoost, the first real-time frontal-view face detector."}, {"text": "The indices are for individual input vectors given as a triplet. The triplet is formed by drawing an anchor input, a positive input that describes the same entity as the anchor entity, and a negative input that does not describe the same entity as the anchor entity. These inputs are then run through the network, and the outputs are used in the loss function."}, {"text": "In this journal, authors proposed a new approach to use SIFT descriptors for multiple object detection purposes. The proposed multiple object detection approach is tested on aerial and satellite images.SIFT features can essentially be applied to any task that requires identification of matching locations between images. Work has been done on applications such as recognition of particular object categories in 2D images, 3D reconstruction,"}, {"text": "Early on, grocery stores were a common type of anchor store, since they are visited often. However, research on consumer behavior revealed that most trips to the grocery store did not result in visits to surrounding shops. 
Large supermarkets remain common anchor stores within power centers however."}]}, {"question": "How do you interpret the odds ratio in logistic regression", "positive_ctxs": [{"text": "To conclude, the important thing to remember about the odds ratio is that an odds ratio greater than 1 is a positive association (i.e., higher number for the predictor means group 1 in the outcome), and an odds ratio less than 1 is negative association (i.e., higher number for the predictor means group 0 in the outcome"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Due to the widespread use of logistic regression, the odds ratio is widely used in many fields of medical and social science research. The odds ratio is commonly used in survey research, in epidemiology, and to express the results of some clinical trials, such as in case-control studies. It is often abbreviated \"OR\" in reports."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. 
Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}, {"text": "The simplest measure of association for a 2 \u00d7 2 contingency table is the odds ratio. Given two events, A and B, the odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. 
Two events are independent if and only if the odds ratio is 1; if the odds ratio is greater than 1, the events are positively associated; if the odds ratio is less than 1, the events are negatively associated."}]}, {"question": "How does multinomial naive Bayes work", "positive_ctxs": [{"text": "Multinomial Na\u00efve Bayes uses term frequency i.e. the number of times a given term appears in a document. After normalization, term frequency can be used to compute maximum likelihood estimates based on the training data to estimate the conditional probability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "For some types of probability models, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. 
In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without accepting Bayesian probability or using any Bayesian methods."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}, {"text": "This event model is especially popular for classifying short texts. It has the benefit of explicitly modelling the absence of terms. Note that a naive Bayes classifier with a Bernoulli event model is not the same as a multinomial NB classifier with frequency counts truncated to one."}]}, {"question": "Why is energy quantized", "positive_ctxs": [{"text": "Energy is quantized in some systems, meaning that the system can have only certain energies and not a continuum of energies, unlike the classical case. This would be like having only certain speeds at which a car can travel because its kinetic energy can have only certain values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Even though the quantized massless \u03c64 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson-Fisher fixed point, below."}, {"text": "The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. 
Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or \"spreading\" of the total energy of each constituent of a system over its particular quantized energy levels."}, {"text": "However, even though the classical massless \u03c64 theory is scale-invariant in D=4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g."}, {"text": "The central bin is not divided in angular directions. The gradient orientations are quantized in 16 bins resulting in 272-bin histogram. The size of this descriptor is reduced with PCA."}, {"text": "This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant."}, {"text": "Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point."}, {"text": "A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory."}]}, {"question": "How do you find the continuous probability of a uniform", "positive_ctxs": [{"text": "The More Formal Formula You can solve these types of problems using the steps above, or you can us the formula for finding the probability for a continuous uniform distribution: P(X) = d \u2013 c / b \u2013 a. 
This is also sometimes written as: P(X) = x2 \u2013 x1 / b \u2013 a."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "In physics, if you observe a gas at a fixed temperature and pressure in a uniform gravitational field, the heights of the various molecules also follow an approximate exponential distribution, known as the Barometric formula. This is a consequence of the entropy property mentioned below."}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "Why is sigmoid nonlinear", "positive_ctxs": [{"text": "The use of sigmoidal nonlinear functions was inspired by the ouputs of biological neurons. However, this function is not smooth (it fails to be differential at the threshold value). Therefore, the sigmoid class of functions is a differentiable alternative that still captures much of the behavior of biological neurons."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. 
A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}]}, {"question": "How do you find the difference between two categorical variables", "positive_ctxs": [{"text": "Chi-square Test. The Pearson's \u03c72 test (after Karl Pearson, 1900) is the most commonly used test for the difference in distribution of categorical variables between two or more independent groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The power of the test is the probability that the test will find a statistically significant difference between men and women, as a function of the size of the true difference between those two populations."}, {"text": "Linear trends are also used to find associations between ordinal data and other categorical variables, normally in a contingency tables. A correlation r is found between the variables where r lies between -1 and 1. To test the trend, a test statistic:"}, {"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. 
Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}, {"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}, {"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "Why is regularization used", "positive_ctxs": [{"text": "Regularization is a technique used for tuning the function by adding an additional penalty term in the error function. The additional term controls the excessively fluctuating function such that the coefficients don't take extreme values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). 
Why Simplicity is no Problem for Bayesians\"."}, {"text": "In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over"}, {"text": "An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence couldn't effectively support content evolution."}, {"text": "Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset)."}, {"text": "Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset)."}, {"text": "Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset)."}, {"text": "Validation datasets can be used for regularization by early stopping (stopping training when the error on the validation dataset increases, as this is a sign of overfitting to the training dataset)."}]}, {"question": "How are histograms used in photography", "positive_ctxs": [{"text": "2:1510:12Suggested clip \u00b7 108 secondsHistograms In Photography - YouTubeYouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In June 2017, Taye Diggs, Lucy Liu, and Joan Smalls joined the cast of the 
film. Principal photography began in June 2017 in New York City."}, {"text": "Pyramid match kernel is a fast algorithm (linear complexity instead of classic one in quadratic complexity) kernel function (satisfying Mercer's condition) which maps the BoW features, or set of features in high dimension, to multi-dimensional multi-resolution histograms. An advantage of these multi-resolution histograms is their ability to capture co-occurring features. The pyramid match kernel builds multi-resolution histograms by binning data points into discrete regions of increasing size."}, {"text": "First a set of orientation histograms is created on 4\u00d74 pixel neighborhoods with 8 bins each. These histograms are computed from magnitude and orientation values of samples in a 16\u00d716 region around the keypoint such that each histogram contains samples from a 4\u00d74 subregion of the original neighborhood region. The image gradient magnitudes and orientations are sampled around the keypoint location, using the scale of the keypoint to select the level of Gaussian blur for the image."}, {"text": "Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. (Sixth chapter: \"Math error number 6: Simpson's paradox."}, {"text": "Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. (First chapter: \"Math error number 1: multiplying non-independent probabilities."}, {"text": "The collinearity equations are a set of two equations, used in photogrammetry and computer stereo vision, to relate coordinates in an image (sensor) plane (in two dimensions) to object coordinates (in three dimensions). In the photography setting, the equations are derived by considering the central projection of a point of the object through the optical centre of the camera to the image in the image (sensor) plane. 
The three points, object point, image point and optical centre, are always collinear."}, {"text": "Among the approaches that are used to feature description, one can mention N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection."}]}, {"question": "What is curse of dimensionality in Knn", "positive_ctxs": [{"text": "Abstract: The dimensionality curse phenomenon states that in high dimensional spaces distances between nearest and farthest points from query points become almost equal. Therefore, nearest neighbor calculations cannot discriminate candidate points."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot. There is a large IBC literature on results in various settings; see Where to Learn More below."}, {"text": "A typical rule of thumb is that there should be at least 5 training examples for each dimension in the representation. In machine learning and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, which is also known as Hughes phenomenon. 
This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily.Nevertheless, in the context of a simple classifier (linear discriminant analysis in the multivariate Gaussian model under the assumption of a common known covariance matrix) Zollanvari et al."}, {"text": "The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse."}, {"text": "noted that while the typical formalizations of the curse of dimensionality affect i.i.d. data, having data that is separated in each attribute becomes easier even in high dimensions, and argued that the signal-to-noise ratio matters: data becomes easier with each attribute that adds signal, and harder with attributes that only add noise (irrelevant error) to the data. In particular for unsupervised data analysis this effect is known as swamping."}, {"text": "Geometric anomalities in high dimension lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. 
An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by linear inequality, with high probability, even for exponentially large samples."}, {"text": "Geometric anomalities in high dimension lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by linear inequality, with high probability, even for exponentially large samples."}, {"text": "Geometric anomalities in high dimension lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier. An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional then each point can be separated from the rest of the sample by linear inequality, with high probability, even for exponentially large samples."}]}, {"question": "How do you impute missing values", "positive_ctxs": [{"text": "The following are common methods:Mean imputation. Simply calculate the mean of the observed values for that variable for all individuals who are non-missing. Substitution. Hot deck imputation. Cold deck imputation. Regression imputation. Stochastic regression imputation. Interpolation and extrapolation."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Fitted values from the regression model are then used to impute the missing values. 
The problem is that the imputed data do not have an error term included in their estimation, thus the estimates fit perfectly along the regression line without any residual variance. This causes relationships to be over identified and suggest greater precision in the imputed values than is warranted."}, {"text": "Fitted values from the regression model are then used to impute the missing values. The problem is that the imputed data do not have an error term included in their estimation, thus the estimates fit perfectly along the regression line without any residual variance. This causes relationships to be over identified and suggest greater precision in the imputed values than is warranted."}, {"text": "One form of hot-deck imputation is called \"last observation carried forward\" (or LOCF for short), which involves sorting a dataset according to any of a number of variables, thus creating an ordered dataset. The technique then finds the first missing value and uses the cell value immediately prior to the data that are missing to impute the missing value. The process is repeated for the next cell with a missing value until all missing values have been imputed."}, {"text": "One form of hot-deck imputation is called \"last observation carried forward\" (or LOCF for short), which involves sorting a dataset according to any of a number of variables, thus creating an ordered dataset. The technique then finds the first missing value and uses the cell value immediately prior to the data that are missing to impute the missing value. The process is repeated for the next cell with a missing value until all missing values have been imputed."}, {"text": "To impute missing data in statistics, NMF can take missing data while minimizing its cost function, rather than treating these missing data as zeros. This makes it a mathematically proven method for data imputation in statistics. 
By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al."}, {"text": "To impute missing data in statistics, NMF can take missing data while minimizing its cost function, rather than treating these missing data as zeros. This makes it a mathematically proven method for data imputation in statistics. By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al."}]}, {"question": "What is the difference between boosting and bagging", "positive_ctxs": [{"text": "Bagging is a way to decrease the variance in the prediction by generating additional data for training from dataset using combinations with repetitions to produce multi-sets of the original data. Boosting is an iterative technique which adjusts the weight of an observation based on the last classification."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called \"leveraging algorithms\", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses.
AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners."}, {"text": "Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called \"leveraging algorithms\", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners."}, {"text": "Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called \"leveraging algorithms\", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses.
AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners."}, {"text": "Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called \"leveraging algorithms\", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners."}, {"text": "Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called \"leveraging algorithms\", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners."}]}, {"question": "Is a way of finding the K value for K means clustering", "positive_ctxs": [{"text": "There is a popular method known as elbow method which is used to determine the optimal value of K to perform the K-Means Clustering Algorithm. The basic idea behind this method is that it plots the various values of cost with changing k. As the value of K increases, there will be fewer elements in the cluster."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An a priori probability is a probability that is derived purely by deductive reasoning.
One way of deriving a priori probabilities is the principle of indifference, which has the character of saying that, if there are N mutually exclusive and collectively exhaustive events and if they are equally likely, then the probability of a given event occurring is 1/N. Similarly the probability of one of a given collection of K events is K / N."}, {"text": "To this data, one fits a length-p coefficient vector w and a set of thresholds \u03b81, ..., \u03b8K\u22121 with the property that \u03b81 < \u03b82 < ... < \u03b8K\u22121. This set of thresholds divides the real number line into K disjoint segments, corresponding to the K response levels."}, {"text": "It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent.There is a corollary, humorously called the \"full employment theorem\" in the programming language community, stating that there is no perfect size-optimizing compiler."}, {"text": "Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 \u00b0F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to \u2212272.15 \u00b0C, or the temperature difference equal to 1 \u00b0C."}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in"}, {"text": "Softmax loss is used for predicting a single class of K mutually exclusive classes. 
Sigmoid cross-entropy loss is used for predicting K independent probability values in"}]}, {"question": "Can an estimator be unbiased or inconsistent", "positive_ctxs": [{"text": "Say we want to estimate the mean of a population. While the most used estimator is the average of the sample, another possible estimator is simply the first number drawn from the sample. In theory, you could have an unbiased estimator whose variance is asymptotically nonzero, and that would be inconsistent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. 
A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful."}, {"text": "Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cram\u00e9r\u2013Rao bound, which is an absolute lower bound on variance for statistics of a variable."}, {"text": "is uniformly minimum variance unbiased (UMVU), which makes it the \"best\" estimator among all unbiased ones. However it can be shown that the biased estimator"}, {"text": "is uniformly minimum variance unbiased (UMVU), which makes it the \"best\" estimator among all unbiased ones. However it can be shown that the biased estimator"}, {"text": "is uniformly minimum variance unbiased (UMVU), which makes it the \"best\" estimator among all unbiased ones. However it can be shown that the biased estimator"}, {"text": "is uniformly minimum variance unbiased (UMVU), which makes it the \"best\" estimator among all unbiased ones. However it can be shown that the biased estimator"}]}, {"question": "What is a normal score in statistics", "positive_ctxs": [{"text": "The term normal score is used with two different meanings in statistics. 
A given data point is assigned a value which is either exactly, or an approximation, to the expectation of the order statistic of the same rank in a sample of standard normal random variables of the same size as the observed data set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": ", so that, in general, it is not a statistic. However, in certain applications, such as the score test, the score is evaluated at a specific value of"}, {"text": "It is very similar to program synthesis, which means a planner generates source code which can be executed by an interpreter. An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid-1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}]}, {"question": "How can you improve the generalization of the deep learning model", "positive_ctxs": [{"text": "You can use a generative model. You can also use simple tricks. For example, with photograph image data, you can get big gains by randomly shifting and rotating existing images. It improves the generalization of the model to such transforms in the data if they are to be expected in new data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is activation function used in a neural network", "positive_ctxs": [{"text": "In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard integrated circuit can be seen as a digital network of activation functions that can be \"ON\" (1) or \"OFF\" (0), depending on input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. 
The identity activation function does not satisfy this property."}, {"text": "The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. This is the basic structure used for artificial neurons, which in a neural network often looks like"}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}, {"text": "where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation."}]}, {"question": "What is unsupervised anomaly detection", "positive_ctxs": [{"text": "The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events without any prior knowledge about these. The only information available is that the percentage of anomalies in the dataset is small, usually less than 1%."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition, speech recognition and anomaly detection in network traffic or IDSs (intrusion detection systems)."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. 
Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist.
Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}, {"text": "This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately.
Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set."}]}, {"question": "How is analysis of variance calculated", "positive_ctxs": [{"text": "Steps for Using ANOVA: Step 1: Compute the Variance Between. First, the sum of squares (SS) between is computed: Step 2: Compute the Variance Within. Again, first compute the sum of squares within. Step 3: Compute the Ratio of Variance Between and Variance Within. This is called the F-ratio."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Pseudoreplication is a technical error associated with analysis of variance. Complexity hides the fact that statistical analysis is being attempted on a single sample (N=1). For this degenerate case the variance cannot be calculated (division by zero)."}, {"text": "For each of the four basic areas surrounding a pixel, the mean and variance are calculated. Then, the window size of each of the four basic areas is increased by 1. If the variance of a new window is smaller than before the resizing of the filter window, then the mean and variance of the basic area will take the newly calculated values."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval?
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}]}, {"question": "When should you use logistic regression", "positive_ctxs": [{"text": "Use simple logistic regression when you have one nominal variable and one measurement variable, and you want to know whether variation in the measurement variable causes variation in the nominal variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis. When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Conditional logistic regression is more general than the CMH test as it can handle continuous variables and perform multivariate analysis.
When the CMH test can be applied, the CMH test statistic and the score test statistic of the conditional logistic regression are identical."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates to the whole data set. Theoretically you should obtain somewhat higher variance from the smaller datasets used for estimation, but the expectation of the coefficient values should be the same. 
Naturally, the observed coefficient values will vary, but look at how much they vary."}, {"text": "When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space."}, {"text": "When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space."}]}, {"question": "What is difference between binomial and multinomial distribution", "positive_ctxs": [{"text": "A multinomial experiment is almost identical with one main difference: a binomial experiment can have two outcomes, while a multinomial experiment can have multiple outcomes. A binomial experiment will have a binomial distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The beta distribution is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial distribution and categorical distribution."}, {"text": "When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution."}, {"text": "The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. 
The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions."}, {"text": "The distribution of N thus is the binomial distribution with parameters n and p, where p = 1/2. The mean of the binomial distribution is n/2, and the variance is n/4. This distribution function will be denoted by N(d)."}, {"text": "The model of an urn with green and red marbles can be extended to the case where there are more than two colors of marbles. If there are Ki marbles of color i in the urn and you take n marbles at random without replacement, then the number of marbles of each color in the sample (k1, k2,..., kc) has the multivariate hypergeometric distribution. This has the same relationship to the multinomial distribution that the hypergeometric distribution has to the binomial distribution\u2014the multinomial distribution is the \"with-replacement\" distribution and the multivariate hypergeometric is the \"without-replacement\" distribution."}, {"text": "Because the square of a standard normal distribution is the chi-square distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-square distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed)."}, {"text": "The difference between the multinomial logit model and numerous other methods, models, algorithms, etc. 
with the same basic setup (the perceptron algorithm, support vector machines, linear discriminant analysis, etc.) is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}]}, {"question": "What are regression models used for", "positive_ctxs": [{"text": "Use regression analysis to describe the relationships between a set of independent variables and the dependent variable. Regression analysis produces a regression equation where the coefficients represent the relationship between each independent variable and the dependent variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}, {"text": "The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLiM family. Commonly used models in the GLiM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLiM may be spoken of as a general family of statistical models or as specific models for specific outcome types."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine."}, {"text": "Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine."}, {"text": "Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine."}, {"text": "Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine."}]}, {"question": "Why is skewness important in statistics", "positive_ctxs": [{"text": "The primary reason skew is important is that analysis based on normal distributions incorrectly estimates expected returns and risk. 
Knowing that the market has a 70% probability of going up and a 30% probability of going down may appear helpful if you rely on normal distributions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where \u03bc is the mean, \u03c3 is the standard deviation, E is the expectation operator, \u03bc3 is the third central moment, and \u03bat are the t-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant \u03ba3 to the 1.5th power of the second cumulant \u03ba2."}, {"text": "where \u03bc is the mean, \u03c3 is the standard deviation, E is the expectation operator, \u03bc3 is the third central moment, and \u03bat are the t-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant \u03ba3 to the 1.5th power of the second cumulant \u03ba2."}, {"text": "The kurtosis in both these cases is 1. Since they are both symmetrical their skewness is 0 and the difference is 1."}, {"text": "A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew."}, {"text": "A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew."}, {"text": "Note, however, that the converse is not true in general, i.e. 
zero skewness does not imply that the mean is equal to the median."}, {"text": "Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median."}]}, {"question": "What loss function will you use to measure multi label problems", "positive_ctxs": [{"text": "What you want is multi-label classification, so you will use Binary Cross-Entropy Loss or Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}, {"text": "Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples)."}, {"text": "Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. 
Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples)."}, {"text": "Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples)."}, {"text": "Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples)."}, {"text": "Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. 
Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples)."}]}, {"question": "What is acceptable inter rater reliability", "positive_ctxs": [{"text": "According to Cohen's original article, values \u2264 0 as indicating no agreement and 0.01\u20130.20 as none to slight, 0.21\u20130.40 as fair, 0.41\u2013 0.60 as moderate, 0.61\u20130.80 as substantial, and 0.81\u20131.00 as almost perfect agreement."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "It is common to make decisions under uncertainty. What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet \"guarantee\" acceptable performance?"}, {"text": "Test the results for reliability and validity \u2013 Compute R-squared to determine what proportion of variance of the scaled data can be accounted for by the MDS procedure. An R-square of 0.6 is considered the minimum acceptable level. 
An R-square of 0.8 is considered good for metric scaling and .9 is considered good for non-metric scaling."}, {"text": "In a de minimis definition, severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 T\u014dhoku earthquake and tsunami)\u2014in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Before the test is actually performed, the maximum acceptable probability of a Type I error (\u03b1) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.)"}]}, {"question": "What is Gan in deep learning", "positive_ctxs": [{"text": "Generative adversarial networks (GANs) are an exciting recent innovation in machine learning. GANs are generative models: they create new data instances that resemble your training data. 
For example, GANs can create images that look like photographs of human faces, even though the faces don't belong to any real person."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}, {"text": "The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras."}]}, {"question": "Why it is beneficial to use pre trained models", "positive_ctxs": [{"text": "Models that are pre-trained on ImageNet are good at detecting high-level features like edges, patterns, etc. These models understand certain feature representations, which can be reused."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An ensemble is itself a supervised learning algorithm, because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. 
This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built."}, {"text": "An ensemble is itself a supervised learning algorithm, because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built."}, {"text": "An ensemble is itself a supervised learning algorithm, because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built."}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}]}, {"question": "What is time series data in statistics", "positive_ctxs": [{"text": "Time series data means that data is in a series of particular time periods or intervals. The data is considered in three types: Time series data: A set of observations on the values that a variable takes at different times. Cross-sectional data: Data of one or more variables, collected at the same point in time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. 
Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. 
Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the \"variable\": a univariate time series is the series of values over time of a single quantity."}]}, {"question": "What are the two types of hierarchical clustering", "positive_ctxs": [{"text": "There are two types of hierarchical clustering, Divisive and Agglomerative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. 
Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated (\"correlated\") subspace clusters that can be modeled by giving a correlation of their attributes. Examples for such clustering algorithms are CLIQUE and SUBCLU.Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC, hierarchical subspace clustering and DiSH) and correlation clustering (HiCO, hierarchical correlation clustering, 4C using \"correlation connectivity\" and ERiC exploring hierarchical density-based correlation clusters)."}]}, {"question": "What is variance in multiple regression", "positive_ctxs": [{"text": "In terms of linear regression, variance is a measure of how far observed values differ from the average of predicted values, i.e., their difference from the predicted value mean. 
The goal is to have a value that is low."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations \u2013 variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of a sum of squares calculations. Deviance is analogous to the sum of squares calculations in linear regression and is a measure of the lack of fit to the data in a logistic regression model."}, {"text": "Commonality analysis is a statistical technique within multiple linear regression that decomposes a model's R2 statistic (i.e., explained variance) by all independent variables on a dependent variable in a multiple linear regression model into commonality coefficients. 
These coefficients are variance components that are uniquely explained by each independent variable (i.e., unique effects), and variance components that are shared in each possible combination of the independent variables (i.e., common effects). These commonality coefficients sum up to the total variance explained (model R2) of all the independent variables on the dependent variable."}, {"text": "For instance, in multiple regression analysis, the power for detecting an effect of a given size is related to the variance of the covariate. Since different covariates will have different variances, their powers will differ as well."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}, {"text": "Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression."}]}, {"question": "What is the difference between a probability distribution function and a cumulative", "positive_ctxs": [{"text": "PDF according to input X being discrete or continuous generates probability mass functions and CDF does the same but generates cumulative mass function. That means, PDF is derivative of CDF and CDF can be applied at any point where PDF has been applied. 
The cumulative function is the integral of the density function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for continuous and discrete variables, is by means of a probability function"}, {"text": "A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for continuous and discrete variables, is by means of a probability function"}, {"text": "A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. 
One of the most general descriptions, which applies for continuous and discrete variables, is by means of a probability function"}, {"text": "A probability distribution can be described in various forms, such as by a probability mass function or a cumulative distribution function. One of the most general descriptions, which applies for continuous and discrete variables, is by means of a probability function"}]}, {"question": "How can you improve the accuracy of an object detection", "positive_ctxs": [{"text": "6 Freebies to Help You Increase the Performance of Your Object Detection ModelsVisually Coherent Image Mix-up for Object Detection (+3.55% mAP Boost)Classification Head Label Smoothening (+2.16% mAP Boost)Data Pre-processing (Mixed Results)Training Scheduler Revamping (+1.44% mAP Boost)More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Moreover, the developing use of propensity score matching to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results."}, {"text": "Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to improve complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy of unseen objects."}, {"text": "Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object."}, {"text": "Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. 
Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object."}, {"text": "Fadel chose an appropriate intermediate design variable for each function based on a gradient matching condition for the previous point. Vanderplaats initiated a second generation of high quality approximations when he developed the force approximation as an intermediate response approximation to improve the approximation of stress constraints. Canfield developed a Rayleigh quotient approximation to improve the accuracy of eigenvalue approximations."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The determination of consistent clusters is performed rapidly by using an efficient hash table implementation of the generalised Hough transform. Each cluster of 3 or more features that agree on an object and its pose is then subject to further detailed model verification and subsequently outliers are discarded. Finally the probability that a particular set of features indicates the presence of an object is computed, given the accuracy of fit and number of probable false matches."}]}, {"question": "What are the conditions for conducting a chi square goodness of fit test", "positive_ctxs": [{"text": "The chi-square goodness of fit test is appropriate when the following conditions are met: The sampling method is simple random sampling. The variable under study is categorical. The expected value of the number of sample observations in each level of the variable is at least 5."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data does not come from a normal distribution). 
This result is used to justify using a normal distribution, or a chi square distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "It is not consistent for the sample median. In the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi square distribution with two degrees of freedom."}, {"text": "The goodness of fit of a statistical model describes how well it fits a set of observations. Measures of goodness of fit typically summarize the discrepancy between observed values and the values expected under the model in question. Such measures can be used in statistical hypothesis testing, e.g."}]}, {"question": "Is R the slope of the regression line", "positive_ctxs": [{"text": "In this context, correlation only makes sense if the relationship is indeed linear. 
Second, the slope of the regression line is proportional to the correlation coefficient: slope = r*(SD of y)/(SD of x) Third: the square of the correlation, called \"R-squared\", measures the \"fit\" of the regression line to the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set."}, {"text": "In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (x, y) of the data points."}, {"text": "In this case, the slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (x, y) of the data points."}, {"text": "The Theil\u2013Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers."}, {"text": "The Theil\u2013Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers."}, {"text": "The Theil\u2013Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. 
It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers."}, {"text": "The Theil\u2013Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers."}]}, {"question": "What are the disadvantages of using a histogram", "positive_ctxs": [{"text": "Weaknesses. Histograms have many benefits, but there are two weaknesses. A histogram can present data that is misleading. For example, using too many blocks can make analysis difficult, while too few can leave out important data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram Mi of a histogram mj is defined as:"}, {"text": "Instead of using a 4\u00d74 grid of histogram bins, all bins extend to the center of the feature. This improves the descriptor's robustness to scale changes."}, {"text": "In a more general mathematical sense, a histogram is a function mi that counts the number of observations that fall into each of the disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. 
Thus, if we let n be the total number of observations and k be the total number of bins, the histogram mi meets the following conditions:"}, {"text": "In a more general mathematical sense, a histogram is a function mi that counts the number of observations that fall into each of the disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram mi meets the following conditions:"}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}, {"text": "As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot."}]}, {"question": "How do you determine a false positive rate", "positive_ctxs": [{"text": "The false positive rate is calculated as FP/FP+TN, where FP is the number of false positives and TN is the number of true negatives (FP+TN being the total number of negatives). 
It's the probability that a false alarm will be raised: that a positive result will be given when the true value is negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": ")As opposed to that, the false positive rate is associated with a post-prior result, which is the expected number of false positives divided by the total number of hypotheses under the real combination of true and non-true null hypotheses (disregarding the \"global null\" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.Moreover, false positive rate is usually used regarding a medical test or diagnostic device (i.e. \"the false positive rate of a certain diagnostic device is 1%\"), while type I error is a term associated with statistical tests, where the meaning of the word \"positive\" is not as clear (i.e."}, {"text": ")As opposed to that, the false positive rate is associated with a post-prior result, which is the expected number of false positives divided by the total number of hypotheses under the real combination of true and non-true null hypotheses (disregarding the \"global null\" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.Moreover, false positive rate is usually used regarding a medical test or diagnostic device (i.e. 
\"the false positive rate of a certain diagnostic device is 1%\"), while type I error is a term associated with statistical tests, where the meaning of the word \"positive\" is not as clear (i.e."}, {"text": "When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate ) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply"}, {"text": "When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate ) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply"}, {"text": "False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90\u201395% of women who get a positive mammogram do not have the condition."}, {"text": "False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90\u201395% of women who get a positive mammogram do not have the condition."}]}, {"question": "Which regression model is best", "positive_ctxs": [{"text": "Statistical Methods for Finding the Best Regression ModelAdjusted R-squared and Predicted R-squared: Generally, you choose the models that have higher adjusted and predicted R-squared values. 
P-values for the predictors: In regression, low p-values indicate terms that are statistically significant.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Newton's method requires the 2nd order derivatives, so for each iteration, the number of function calls is in the order of N\u00b2, but for a simpler pure gradient optimizer it is only N. However, gradient optimizers need usually more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself."}, {"text": "will generally be small but not necessarily zero. Which of these regimes is more relevant depends on the specific data set at hand."}, {"text": "The general linear model or general multivariate regression model is simply a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as"}, {"text": "The general linear model or general multivariate regression model is simply a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as"}, {"text": "Which treatment is considered better is determined by an inequality between two ratios (successes/total). The reversal of the inequality between the ratios, which creates Simpson's paradox, happens because two effects occur together:"}, {"text": "In statistics, Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables. Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters. 
A Poisson regression model is sometimes known as a log-linear model, especially when used to model contingency tables."}, {"text": "To minimize MSE, the model could be more accurate, which would mean the model is closer to actual data. One example of a linear regression using this method is the least squares method\u2014which evaluates appropriateness of linear regression model to model bivariate dataset, but whose the limitation is related to known distribution of the data."}]}, {"question": "What is a positive skew in statistics", "positive_ctxs": [{"text": "In statistics, a positively skewed (or right-skewed) distribution is a type of distribution in which most values are clustered around the left tail of the distribution while the right tail of the distribution is longer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}, {"text": "For example, in the distribution of adult residents across US households, the skew is to the right. However, since the majority of cases is less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is right of the median under right skew failed."}]}, {"question": "Why does ridge regression reduce variance", "positive_ctxs": [{"text": "Ridge regression has an additional factor called \u03bb (lambda) which is called the penalty factor which is added while estimating beta coefficients. 
This penalty factor penalizes high value of beta which in turn shrinks beta coefficients thereby reducing the mean squared error and predicted error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Compared to ordinary least squares, ridge regression is not unbiased. It accepts little bias to reduce variance and the mean square error, and helps to improve the prediction accuracy. Thus, ridge estimator yields more stable solutions by shrinking coefficients but suffers from the lack of sensitivity to the data."}, {"text": "The bias\u2013variance decomposition forms the conceptual basis for regression regularization methods such as Lasso and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance."}, {"text": "At the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking the sum of the squares of the regression coefficients to be less than a fixed value in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable."}, {"text": "Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normal prior distributions, lasso can be interpreted as linear regression for which the coefficients have Laplace prior distributions. The Laplace distribution is sharply peaked at zero (its first derivative is discontinuous) and it concentrates its probability mass closer to zero than does the normal distribution. 
This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not."}, {"text": "Lasso can set coefficients to zero, while the superficially similar ridge regression cannot. This is due to the difference in the shape of their constraint boundaries. Both lasso and ridge regression can be interpreted as minimizing the same objective function"}, {"text": "Therefore, the lasso estimates share features of both ridge and best subset selection regression since they both shrink the magnitude of all the coefficients, like ridge regression and set some of them to zero, as in the best subset selection case. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it."}, {"text": "(In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)"}]}, {"question": "Which algorithm is used for face detection", "positive_ctxs": [{"text": "Eigenface"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. 
The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}, {"text": "AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:"}]}, {"question": "What is the learning rate in the context of deep learning", "positive_ctxs": [{"text": "Learning Rate and Gradient Descent Specifically, the learning rate is a configurable hyperparameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0. The learning rate controls how quickly the model is adapted to the problem."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. 
A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}, {"text": "While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. A too high learning rate will make the learning jump over minima but a too low learning rate will either take too long to converge or get stuck in an undesirable local minimum.In order to achieve faster convergence, prevent oscillations and getting stuck in undesirable local minima the learning rate is often varied during training either in accordance to a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method."}]}, {"question": "What is Attention neural network", "positive_ctxs": [{"text": "Informally, a neural attention mechanism equips a neural network with the ability to focus on a subset of its inputs (or features): it selects specific inputs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}]}, {"question": "Why are p values considered confounded statistics", "positive_ctxs": [{"text": "Seriously, the p value is literally a confounded index because it reflects both the size of the 
underlying effect and the size of the sample. Hence any information included in the p value is ambiguous (Lang et al. 1998). The smaller the sample, the less likely the result will be statistically significant."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s) where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2."}, {"text": "This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s) where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2."}, {"text": "Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from preceding steps."}, {"text": "Then for future periods the same procedure is used, each time using one more forecast value on the right side of the predictive equation until, after p predictions, all p right-side values are predicted values from preceding steps."}, {"text": "The compound p \u2192 q is false if and only if p is true and q is false. By the same stroke, p \u2192 q is true if and only if either p is false or q is true (or both). The \u2192 symbol is a function that uses pairs of truth values of the components p, q (e.g., p is True, q is True ... p is False, q is False) and maps it to the truth values of the compound p \u2192 q."}, {"text": "For p = 0 and p = \u221e these functions are defined by taking limits, respectively as p \u2192 0 and p \u2192 \u221e. For p = 0 the limiting values are 00 = 0 and a0 = 0 or a \u2260 0, so the difference becomes simply equality, so the 0-norm counts the number of unequal points. 
For p = \u221e the largest number dominates, and thus the \u221e-norm is the maximum difference."}, {"text": "For p = 0 and p = \u221e these functions are defined by taking limits, respectively as p \u2192 0 and p \u2192 \u221e. For p = 0 the limiting values are 00 = 0 and a0 = 0 or a \u2260 0, so the difference becomes simply equality, so the 0-norm counts the number of unequal points. For p = \u221e the largest number dominates, and thus the \u221e-norm is the maximum difference."}]}, {"question": "How far away are we from AGI", "positive_ctxs": [{"text": "However, experts expect that it won't be until 2060 until AGI has gotten good enough to pass a \"consciousness test\". In other words, we're probably looking at 40 years from now before we see an AI that could pass for a human."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Viana, F.A.C., Simpson, T.W., Balabanov, V. and Toropov, V. \"Metamodeling in multidisciplinary design optimization: How far have we really come?\" AIAA Journal 52 (4) 670-690, 2014 (DOI: 10.2514/1.J052375)"}, {"text": "important property of such high dimensional spaces is that two randomly chosen vectors are relatively far away from each other, meaning that they are uncorrelated. SDM can be considered a realization of Locality-sensitive hashing."}, {"text": "Signal detection theory (SDT) is used when psychologists want to measure the way we make decisions under conditions of uncertainty, such as how we would perceive distances in foggy conditions or during eyewitness identification. SDT assumes that the decision maker is not a passive receiver of information, but an active decision-maker who makes difficult perceptual judgments under conditions of uncertainty. 
In foggy circumstances, we are forced to decide how far away from us an object is, based solely upon visual stimulus which is impaired by the fog."}, {"text": "Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence."}, {"text": "Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence."}, {"text": "Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence."}, {"text": "To shrink the possible space of valid actions multiple values can be assigned to a bucket. 
The exact distance of the finger from its starting position (-Infinity to Infinity) is not known, but rather whether it is far away or not (Near, Far)."}]}, {"question": "Can a 2 layer neural network represent the XOR function", "positive_ctxs": [{"text": "A two layer (one input layer, one output layer; no hidden layer) neural network can represent the XOR function. We must compose multiple logical operations by using a hidden layer to represent the XOR function."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Experiment 1: Learning the ones and twos addition factsIn their first experiment they trained a standard backpropagation neural network on a single training set consisting of 17 single-digit ones problems (i.e., 1 + 1 through 9 + 1, and 1 + 2 through 1 + 9) until the network could represent and respond properly to all of them. The error between the actual output and the desired output steadily declined across training sessions, which reflected that the network learned to represent the target outputs better across trials. Next, they trained the network on a single training set consisting of 17 single-digit twos problems (i.e., 2 + 1 through 2 + 9, and 1 + 2 through 9 + 2) until the network could represent, respond properly to all of them."}, {"text": "An autoencoder is a neural network that learns to copy its input to its output. It has an internal (hidden) layer that describes a code used to represent the input, and it is constituted by two main parts: an encoder that maps the input into the code, and a decoder that maps the code to a reconstruction of the input."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. 
With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights)."}, {"text": "Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditional multi-layer perceptron neural network (MLP). The flattened matrix goes through a fully connected layer to classify the images."}, {"text": "Fully connected layers connect every neuron in one layer to every neuron in another layer. It is the same as a traditional multi-layer perceptron neural network (MLP). The flattened matrix goes through a fully connected layer to classify the images."}]}, {"question": "What is an F distribution in statistics", "positive_ctxs": [{"text": "The F Distribution The distribution of all possible values of the f statistic is called an F distribution, with v1 = n1 - 1 and v2 = n2 - 1 degrees of freedom. The curve of the F distribution depends on the degrees of freedom, v1 and v2."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "at which F is continuous. Here Fn and F are the cumulative distribution functions of random variables Xn and X, respectively."}, {"text": "Let a(f) denote the output of search algorithm a on input f. If a(F) and b(F) are identically distributed for all search algorithms a and b, then F has an NFL distribution. This condition holds if and only if F and F o j are identically distributed for all j in J. 
In other words, there is no free lunch for search algorithms if and only if the distribution of objective functions is invariant under permutation of the solution space."}]}, {"question": "What are the applications of stochastic process", "positive_ctxs": [{"text": "The focus will especially be on applications of stochastic processes as key technologies in various research areas, such as Markov chains, renewal theory, control theory, nonlinear theory, queuing theory, risk theory, communication theory engineering and traffic engineering."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This means that the distribution of the stochastic process does not, necessarily, specify uniquely the properties of the sample functions of the stochastic process.Another problem is that functionals of continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined. For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable. For a continuous-time stochastic process"}, {"text": "This means that the distribution of the stochastic process does not, necessarily, specify uniquely the properties of the sample functions of the stochastic process.Another problem is that functionals of continuous-time process that rely upon an uncountable number of points of the index set may not be measurable, so the probabilities of certain events may not be well-defined. For example, the supremum of a stochastic process or random field is not necessarily a well-defined random variable. For a continuous-time stochastic process"}, {"text": "The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black\u2013Scholes\u2013Merton model. 
The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena."}, {"text": "The process also has many applications and is the main stochastic process used in stochastic calculus. It plays a central role in quantitative finance, where it is used, for example, in the Black\u2013Scholes\u2013Merton model. The process is also used in different fields, including the majority of natural sciences as well as some branches of social sciences, as a mathematical model for various random phenomena."}, {"text": "-dimensional Euclidean space.The concept of separability of a stochastic process was introduced by Joseph Doob,. The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process. Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable."}, {"text": "-dimensional Euclidean space.The concept of separability of a stochastic process was introduced by Joseph Doob,. The underlying idea of separability is to make a countable set of points of the index set determine the properties of the stochastic process. Any stochastic process with a countable index set already meets the separability conditions, so discrete-time stochastic processes are always separable."}, {"text": "An increment of a stochastic process is the difference between two random variables of the same stochastic process. For a stochastic process with an index set that can be interpreted as time, an increment is how much the stochastic process changes over a certain time period."}]}, {"question": "What are the advantages of machine learning", "positive_ctxs": [{"text": "Advantages of Machine LearningContinuous Improvement. Machine Learning algorithms are capable of learning from the data we provide. Automation for everything. 
Trends and patterns identification. Wide range of applications. Data Acquisition. Highly error-prone. Algorithm Selection. Time-consuming."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}, {"text": "Automated machine learning (AutoML) is the process of automating the process of applying machine learning to real-world problems. AutoML covers the complete pipeline from the raw dataset to the deployable machine learning model. AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning."}, {"text": "Embedded methods have been recently proposed that try to combine the advantages of both previous methods. 
A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm."}, {"text": "Embedded methods have been recently proposed that try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm."}]}, {"question": "What do you do when a variable is correlated", "positive_ctxs": [{"text": "The potential solutions include the following:Remove some of the highly correlated independent variables.Linearly combine the independent variables, such as adding them together.Perform an analysis designed for highly correlated variables, such as principal components analysis or partial least squares regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "But sometimes, ethical and/or methological restrictions prevent you from conducting an experiment (e.g. how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. 
Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "Do predictive analytics drive more informed decisions", "positive_ctxs": [{"text": "According to SAS, predictive analytics is \u201cthe use of data, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data. 
In short, predictive intelligence drives marketing decisions.\u201d"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "By using predictive modelling in their cultural resource management plans, they are capable of making more informed decisions when planning for activities that have the potential to require ground disturbance and subsequently affect archaeological sites."}, {"text": "By using predictive modelling in their cultural resource management plans, they are capable of making more informed decisions when planning for activities that have the potential to require ground disturbance and subsequently affect archaeological sites."}, {"text": "Analytics is the \"extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions.\" It is a subset of business intelligence, which is a set of technologies and processes that use data to understand and analyze business performance."}, {"text": "The emergence of Big Data in the late 2000s led to a heightened interest in the applications of unstructured data analytics in contemporary fields such as predictive analytics and root cause analysis."}, {"text": "Risk assessment is much more than an aid to informed decisions making about risk reduction or acceptance. It integrates early warning systems by highlighting the hot spots where disaster prevention and preparedness are most urgent. When risk assessment considers the dynamics of exposure over time, it helps to identify risk reduction policies that are more appropriate to the local context."}, {"text": "Differentiating the fields of educational data mining (EDM) and learning analytics (LA) has been a concern of several researchers. George Siemens takes the position that educational data mining encompasses both learning analytics and academic analytics, the former of which is aimed at governments, funding agencies, and administrators instead of learners and faculty. 
Baepler and Murdoch define academic analytics as an area that \"...combines select institutional data, statistical analysis, and predictive modeling to create intelligence upon which learners, instructors, or administrators can change academic behavior\"."}, {"text": "Allowing consumers to make informed purchasing decisions on the products they offer for sale. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables."}]}, {"question": "Why is POS tagging useful", "positive_ctxs": [{"text": "POS tags make it possible for automatic text processing tools to take into account which part of speech each word is. This facilitates the use of linguistic criteria in addition to statistics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Unlike the Brill tagger where the rules are ordered sequentially, the POS and morphological tagging toolkit RDRPOSTagger stores rule in the form of a ripple-down rules tree."}, {"text": "POS tagging work has been done in a variety of languages, and the set of POS tags used varies greatly with language. Tags usually are designed to include overt morphological distinctions, although this leads to inconsistencies such as case-marking for pronouns but not nouns in English, and much larger cross-language differences. The tag sets for heavily inflected languages such as Greek and Latin can be very large; tagging words in agglutinative languages such as Inuit languages may be virtually impossible."}, {"text": "The most popular \"tag set\" for POS tagging for American English is probably the Penn tag set, developed in the Penn Treebank project. It is largely similar to the earlier Brown Corpus and LOB Corpus tag sets, though much smaller. 
In Europe, tag sets from the Eagles Guidelines see wide use and include versions for multiple languages."}, {"text": "For example, NN for singular common nouns, NNS for plural common nouns, NP for singular proper nouns (see the POS tags used in the Brown Corpus). Other tagging systems use a smaller number of tags and ignore fine differences or model them as features somewhat independent from part-of-speech.In part-of-speech tagging by computer, it is typical to distinguish from 50 to 150 separate parts of speech for English. Work on stochastic methods for tagging Koine Greek (DeRose 1990) has used over 1,000 parts of speech and found that about as many words were ambiguous in that language as in English."}, {"text": "Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, by a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms."}, {"text": "EBR (Exception based reporting) is a data analytics software that identifies the high risk of POS activity and flags the transactions and employees for investigation by the retailer. Some companies use a third party to perform the EBR analysis which essentially does the same thing as an EBR software. It identifies specific patterns in POS activity and when potential fraudulent POS activity is identified, the loss prevention will investigate to determine if the behaviour is intentional theft, policy violation or an inadvertent error that can be solved with additional training."}, {"text": "Sequence tagging is a class of problems prevalent in natural language processing, where input data are often sequences (e.g. The sequence tagging problem appears in several guises, e.g. 
part-of-speech tagging and named entity recognition."}]}, {"question": "What is a good false positive rate", "positive_ctxs": [{"text": "(Example: a test with 90% specificity will correctly return a negative result for 90% of people who don't have the disease, but will return a positive result \u2014 a false-positive \u2014 for 10% of the people who don't have the disease and should have tested negative.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ")As opposed to that, the false positive rate is associated with a post-prior result, which is the expected number of false positives divided by the total number of hypotheses under the real combination of true and non-true null hypotheses (disregarding the \"global null\" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.Moreover, false positive rate is usually used regarding a medical test or diagnostic device (i.e. \"the false positive rate of a certain diagnostic device is 1%\"), while type I error is a term associated with statistical tests, where the meaning of the word \"positive\" is not as clear (i.e."}, {"text": ")As opposed to that, the false positive rate is associated with a post-prior result, which is the expected number of false positives divided by the total number of hypotheses under the real combination of true and non-true null hypotheses (disregarding the \"global null\" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.Moreover, false positive rate is usually used regarding a medical test or diagnostic device (i.e. 
\"the false positive rate of a certain diagnostic device is 1%\"), while type I error is a term associated with statistical tests, where the meaning of the word \"positive\" is not as clear (i.e."}, {"text": "A false positive error is a type I error where the test is checking a single condition, and wrongly gives an affirmative (positive) decision. However it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false positive risk (see Ambiguity in the definition of false positive rate, below)."}, {"text": "A false positive error is a type I error where the test is checking a single condition, and wrongly gives an affirmative (positive) decision. However it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false positive risk (see Ambiguity in the definition of false positive rate, below)."}, {"text": "A false positive error is a type I error where the test is checking a single condition, and wrongly gives an affirmative (positive) decision. However it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false positive risk (see Ambiguity in the definition of false positive rate, below)."}, {"text": "A false positive error is a type I error where the test is checking a single condition, and wrongly gives an affirmative (positive) decision. However it is important to distinguish between the type 1 error rate and the probability of a positive result being false. The latter is known as the false positive risk (see Ambiguity in the definition of false positive rate, below)."}, {"text": "Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in world. 
One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram."}]}, {"question": "What is performance measure in machine learning", "positive_ctxs": [{"text": "It basically defined on probability estimates and measures the performance of a classification model where the input is a probability value between 0 and 1. It can be understood more clearly by differentiating it with accuracy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. 
It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is tensor board", "positive_ctxs": [{"text": "TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. TensorBoard currently supports five visualizations: scalars, images, audio, histograms, and graphs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Here w is called the weight. 
In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism."}, {"text": "Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). In EMP, a tensor is projected to a point through N unit projection vectors."}, {"text": "When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor spaces is known to equal"}, {"text": "The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example, consider the following real tensor"}]}, {"question": "What is the expected value of a discrete distribution", "positive_ctxs": [{"text": "For a discrete random variable, the expected value, usually denoted as or , is calculated using: \u03bc = E ( X ) = \u2211 x i f ( x i )"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. 
In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. 
In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": ").In probability and statistics, the population mean, or expected value, is a measure of the central tendency either of a probability distribution or of the random variable characterized by that distribution. In a discrete probability distribution of a random variable X, the mean is equal to the sum over every possible value weighted by the probability of that value; that is, it is computed by taking the product of each possible value x of X and its probability p(x), and then adding all these products together, giving"}, {"text": "For a data set, the arithmetic mean, also known as average or expected value is the central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted by"}, {"text": "For a data set, the arithmetic mean, also known as average or expected value is the central value of a discrete set of numbers: specifically, the sum of the values divided by the number of values. The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted by"}]}, {"question": "How does Sobel edge detection work", "positive_ctxs": [{"text": "The Sobel filter is used for edge detection. It works by calculating the gradient of image intensity at each pixel within the image. 
The result shows how abruptly or smoothly the image changes at each pixel, and therefore how likely it is that that pixel represents an edge."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The Sobel operator, sometimes called the Sobel\u2013Feldman operator or Sobel filter, is used in image processing and computer vision, particularly within edge detection algorithms where it creates an image emphasising edges. It is named after Irwin Sobel and Gary Feldman, colleagues at the Stanford Artificial Intelligence Laboratory (SAIL). Sobel and Feldman presented the idea of an \"Isotropic 3x3 Image Gradient Operator\" at a talk at SAIL in 1968."}, {"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "is negative, then the process favors changes in sign between terms of the process. This can be likened to edge detection or detection of change in direction."}, {"text": "Specific applications, like step detection and edge detection, may be concerned with changes in the mean, variance, correlation, or spectral density of the process. More generally change detection also includes the detection of anomalous behavior: anomaly detection."}, {"text": "There are several motivations for studying and developing blob detectors. One main reason is to provide complementary information about regions, which is not obtained from edge detectors or corner detectors. In early work in the area, blob detection was used to obtain regions of interest for further processing."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. 
An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "Among the approaches that are used to feature description, one can mention N-jets and local histograms (see scale-invariant feature transform for one example of a local histogram descriptor). In addition to such attribute information, the feature detection step by itself may also provide complementary attributes, such as the edge orientation and gradient magnitude in edge detection and the polarity and the strength of the blob in blob detection."}]}, {"question": "When would you use a multinomial", "positive_ctxs": [{"text": "Multinomial logistic regression is used when the dependent variable in question is nominal (equivalently categorical, meaning that it falls into any one of a set of categories that cannot be ordered in any meaningful way) and for which there are more than two categories."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "From 1912 to 1934 Gosset and Fisher would exchange more than 150 letters. 
In 1924, Gosset wrote in a letter to Fisher, \"I am sending you a copy of Student's Tables as you are the only man that's ever likely to use them!\" Fisher believed that Gosset had effected a \"logical revolution\"."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}]}, {"question": "What is Z score in blood test", "positive_ctxs": [{"text": "A Z score is the number of standard deviations a given result is above (positive score) or below (negative score) the age- and sex-adjusted population mean. 
Results that are within the IGF-1 reference interval will have a Z score between -2.0 and +2.0."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Contrary to common beliefs, adding covariates to the adjustment set Z can introduce bias. A typical counterexample occurs when Z is a common effect of X and Y, a case in which Z is not a confounder (i.e., the null set is Back-door admissible) and adjusting for Z would create bias known as \"collider bias\" or \"Berkson's paradox.\""}, {"text": "Since the score is a function of the observations that are subject to sampling error, it lends itself to a test statistic known as score test in which the parameter is held at a particular value. Further, the ratio of two likelihood functions evaluated at two distinct parameter values can be understood as a definite integral of the score function."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Algorithms with this basic setup are known as linear classifiers. 
What distinguishes them is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted."}, {"text": "Chance factors: luck in selection of answers by sheer guessing, momentary distractionsThe goal of estimating reliability is to determine how much of the variability in test scores is due to errors in measurement and how much is due to variability in true scores.A true score is the replicable feature of the concept being measured. It is the part of the observed score that would recur across different measurement occasions in the absence of error."}]}, {"question": "What is sensitivity in machine learning", "positive_ctxs": [{"text": "Sensitivity is a measure of the proportion of actual positive cases that got predicted as positive (or true positive). This implies that there will be another proportion of actual positive cases, which would get predicted incorrectly as negative (and, thus, could also be termed as the false negative)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. 
Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "Where do we use eigen values", "positive_ctxs": [{"text": "The eigenvalues and eigenvectors of a matrix are often used in the analysis of financial data and are integral in extracting useful information from the raw data. They can be used for predicting stock prices and analyzing correlations between various stocks, corresponding to different companies."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because few Eigenfaces were used to encode human faces of a given population, Turk and Pentland's PCA face detection method greatly reduced the amount of data that had to be processed to detect a face. Pentland in 1994 defined Eigenface features, including eigen eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. 
In 1997 the PCA Eigenface method of face recognition was improved upon using linear discriminant analysis (LDA) to produce Fisherfaces."}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}, {"text": "For most integrands we can't use the fundamental theorem of calculus to compute the integral analytically; we have to approximate it numerically. We compute the values of"}, {"text": "Another possibility is the randomized setting. For some problems we can break the curse of dimensionality by weakening the assurance; for others, we cannot. There is a large IBC literature on results in various settings; see Where to Learn More below."}, {"text": "While variables in mathematics usually take numerical values, in fuzzy logic applications, non-numeric values are often used to facilitate the expression of rules and facts.A linguistic variable such as age may accept values such as young and its antonym old. Because natural languages do not always contain enough value terms to express a fuzzy value scale, it is common practice to modify linguistic values with adjectives or adverbs. For example, we can use the hedges rather and somewhat to construct the additional values rather old or somewhat young."}, {"text": "the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation."}, {"text": "the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. 
Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation."}]}, {"question": "Can you get SPSS on Mac", "positive_ctxs": [{"text": "IBM SPSS Statistics for Mac is the ultimate tool for managing your statistics data and research. This super-app affords you complete control over your data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "SPSS Statistics version 13.0 for Mac OS X was not compatible with Intel-based Macintosh computers, due to the Rosetta emulation software causing errors in calculations. SPSS Statistics 15.0 for Windows needed a downloadable hotfix to be installed in order to be compatible with Windows Vista."}, {"text": "Addition, multiplication, and exponentiation are three of the most fundamental arithmetic operations. Addition, the simplest of these, is undone by subtraction: when you add 5 to x to get x + 5, to reverse this operation you need to subtract 5 from x + 5. Multiplication, the next-simplest operation, is undone by division: if you multiply x by 5 to get 5x, you then can divide 5x by 5 to return to the original expression x. Logarithms also undo a fundamental arithmetic operation, exponentiation."}, {"text": "Several variants of SPSS Statistics exist. SPSS Statistics Gradpacks are highly discounted versions sold only to students. SPSS Statistics Server is a version of SPSS Statistics with a client/server architecture."}, {"text": "On first glance, internal and external validity seem to contradict each other \u2013 to get an experimental design you have to control for all interfering variables. That is why you often conduct your experiment in a laboratory setting. While gaining internal validity (excluding interfering variables by keeping them constant) you lose ecological or external validity because you establish an artificial laboratory setting."}, {"text": "This expression means that y is equal to the power that you would raise b to, to get x. 
This operation undoes exponentiation because the logarithm of x tells you the exponent that the base has been raised to."}, {"text": "SPSS Statistics is a software package used for interactive, or batched, statistical analysis. Long produced by SPSS Inc., it was acquired by IBM in 2009. Current versions (post 2015) have the brand name: IBM SPSS Statistics."}, {"text": "Futurama- Bender is a good example of sapient t AI, throughout many episodes, you will see Bender get angry, sad, or other emotions. Bender also having a mind of his own."}]}, {"question": "What is the difference between analytics and machine learning", "positive_ctxs": [{"text": "Despite having similar aims and processes, there are two main differences between them: Machine learning works out predictions and recalibrates models in real-time automatically after design. Meanwhile, predictive analytics works strictly on \u201ccause\u201d data and must be refreshed with \u201cchange\u201d data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is the difference between independent and independant", "positive_ctxs": [{"text": "The main difference between Independant and Independent is that the Independant is a misspelling of independent and Independent is a Not dependent; free; not subject to control by others; not relying on others."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "there is no dependence between readership of the two journals. That is, reading A and B are independent once educational level is taken into consideration. The educational level 'explains' the difference in reading of A and B."}, {"text": "In psychophysical terms, the size difference between A and C is above the just noticeable difference ('jnd') while the size differences between A and B and B and C are below the jnd."}, {"text": "The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in caching or memoization of recursive calls. 
When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}, {"text": "the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration."}]}, {"question": "How do you use linear regression to predict future values", "positive_ctxs": [{"text": "Statistical researchers often use a linear relationship to predict the (average) numerical value of Y for a given value of X using a straight line (called the regression line). If you know the slope and the y-intercept of that regression line, then you can plug in a value for X and predict the average value for Y."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. For the following example B and D are taken to be two-dimensional vectors i.e."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is binomial theorem", "positive_ctxs": [{"text": "The binomial theorem is valid more generally for any elements x and y of a semiring satisfying xy = yx. The theorem is true even more generally: alternativity suffices in place of associativity. The binomial theorem can be stated by saying that the polynomial sequence {1, x, x2, x3, } is of binomial type."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When working in more dimensions, it is often useful to deal with products of binomial expressions. By the binomial theorem this is equal to"}, {"text": "The binomial theorem is closely related to the probability mass function of the negative binomial distribution. The probability of a (countable) collection of independent Bernoulli trials"}, {"text": "The binomial theorem is valid more generally for two elements x and y in a ring, or even a semiring, provided that xy = yx. For example, it holds for two n \u00d7 n matrices, provided that those matrices commute; this is useful in computing powers of a matrix.The binomial theorem can be stated by saying that the polynomial sequence {1, x, x2, x3, ...} is of binomial type."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. There is evidence that the binomial theorem for cubes was known by the 6th century AD in India.Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chanda\u1e25\u015b\u0101stra by the Indian lyricist Pingala (c. 200 BC), which contains a method for its solution."}, {"text": "The generalized binomial theorem can be extended to the case where x and y are complex numbers. For this version, one should again assume |x| > |y| and define the powers of x + y and x using a holomorphic branch of log defined on an open disk of radius |x| centered at x. The generalized binomial theorem is valid also for elements x and y of a Banach algebra as long as xy = yx, and x is invertible, and ||y/x|| < 1."}, {"text": "Around 1665, Isaac Newton generalized the binomial theorem to allow real exponents other than nonnegative integers. (The same generalization also applies to complex exponents.) In this generalization, the finite sum is replaced by an infinite series."}]}, {"question": "What is Huber regression", "positive_ctxs": [{"text": "In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is monotonic and continuously differentiable, the classifier is always driven toward purer solutions. Zhang (2004) provides a loss function based on least squares, a modified Huber loss function:"}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is the P value formula", "positive_ctxs": [{"text": "The p-value is calculated using the sampling distribution of the test statistic under the null hypothesis, the sample data, and the type of test being done (lower-tailed test, upper-tailed test, or two-sided test). 
an upper-tailed test is specified by: p-value = P(TS \u2265 ts | H0 is true) = 1 - cdf(ts)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "of a list of N ordered values (sorted from least to greatest) is the smallest value in the list such that no more than P percent of the data is strictly less than the value and at least P percent of the data is less than or equal to that value. This is obtained by first calculating the ordinal rank and then taking the value from the ordered list that corresponds to that rank. The ordinal rank n is calculated using this formula"}, {"text": "An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. It runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, the ant would find C(r) = 2\u03c0r. On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed by the Bertrand\u2013Diguet\u2013Puiseux theorem as"}, {"text": "is mean income of the population, Pi is the income rank P of person i, with income X, such that the richest person receives a rank of 1 and the poorest a rank of N. This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Jasso-Deaton formula rescales the coefficient so that its value is 1 if all the"}, {"text": "where the limit is taken as the point Q approaches P on C. The denominator can equally well be taken to be d(P,Q)^3. The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of P, this definition of the curvature can sometimes accommodate a singularity at P. 
The formula follows by verifying it for the osculating circle."}, {"text": "A non-directional formula for the rank-biserial correlation was provided by Wendt, such that the correlation is always positive. The advantage of the Wendt formula is that it can be computed with information that is readily available in published papers. The formula uses only the test value of U from the Mann-Whitney U test, and the sample sizes of the two groups: r = 1 \u2013 (2U)/(n1 n2)."}, {"text": "This is only false when P is true and Q is false. Therefore, we can reduce this proposition to the statement \"False when P and not-Q\" (i.e. \"True when it is not the case that P and not-Q\"):"}, {"text": "The IQR, mean, and standard deviation of a population P can be used in a simple test of whether or not P is normally distributed, or Gaussian. If P is normally distributed, then the standard score of the first quartile, z1, is \u22120.67, and the standard score of the third quartile, z3, is +0.67. Given mean = X and standard deviation = \u03c3 for P, if P is normally distributed, the first quartile"}]}, {"question": "How many peaks does a multimodal distribution have", "positive_ctxs": [{"text": "A unimodal distribution only has one peak in the distribution, a bimodal distribution has two peaks, and a multimodal distribution has three or more peaks. Another way to describe the shape of histograms is by describing whether the data is skewed or symmetric."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "As mentioned earlier, a unimodal distribution with zero value of skewness does not imply that this distribution is symmetric necessarily. 
However, a symmetric unimodal or multimodal distribution always has zero skewness."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "to lower fitness) and valleys (regions from which many paths lead uphill). A fitness landscape with many local peaks surrounded by deep valleys is called rugged. If all genotypes have the same replication rate, on the other hand, a fitness landscape is said to be flat."}, {"text": "In statistics, a bimodal distribution is a probability distribution with two different modes, which may also be referred to as a bimodal distribution. These appear as distinct peaks (local maxima) in the probability density function, as shown in Figures 1 and 2. Categorical, continuous, and discrete data can all form bimodal distributions."}, {"text": "What is the sample size. How many units must be collected for the experiment to be generalisable and have enough power?"}, {"text": "If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index"}]}, {"question": "Does Linear Discriminant Analysis work for distributions other than Gaussian", "positive_ctxs": [{"text": "Since this derivation of the LDA direction via least squares does not use a Gaussian assumption for the features, its applicability extends beyond the realm of Gaussian data. 
However the derivation of the particular intercept or cut-point given in (4.11) does require Gaussian data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Fisher's Linear Discriminant Analysis\u2014an algorithm (different than \"LDA\") that maximizes the ratio of between-class scatter to within-class scatter, without any other assumptions. It is in essence a method of dimensionality reduction for binary classification."}, {"text": "\"Fisher Discriminant Analysis with Kernels\". Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. IEEE Conference on Neural Networks for Signal Processing IX."}, {"text": "\"Fisher Discriminant Analysis with Kernels\". Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. IEEE Conference on Neural Networks for Signal Processing IX."}, {"text": "\"Fisher Discriminant Analysis with Kernels\". Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. IEEE Conference on Neural Networks for Signal Processing IX."}, {"text": "\"Fisher Discriminant Analysis with Kernels\". Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. IEEE Conference on Neural Networks for Signal Processing IX."}, {"text": "\"Fisher Discriminant Analysis with Kernels\". Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. IEEE Conference on Neural Networks for Signal Processing IX."}, {"text": "Gaussian belief propagation is a variant of the belief propagation algorithm when the underlying distributions are Gaussian. 
The first work analyzing this special model was the seminal work of Weiss and Freeman.The GaBP algorithm solves the following marginalization problem:"}]}, {"question": "What does consistent mean in statistics", "positive_ctxs": [{"text": "Consistency refers to logical and numerical coherence. Context: An estimator is called consistent if it converges in probability to its estimand as sample increases (The International Statistical Institute, \"The Oxford Dictionary of Statistical Terms\", edited by Yadolah Dodge, Oxford University Press, 2003)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}]}, {"question": "What is the advantage of the standard deviation over the average deviation", "positive_ctxs": [{"text": "For a normal distribution, the average deviation is somewhat less efficient than the standard deviation as a measure of scale, but this advantage quickly reverses for distributions with heavier tails."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp) and Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. 
On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return is not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return)."}, {"text": "For example, assume an investor had to choose between two stocks. Stock A over the past 20 years had an average return of 10 percent, with a standard deviation of 20 percentage points (pp) and Stock B, over the same period, had average returns of 12 percent but a higher standard deviation of 30 pp. On the basis of risk and return, an investor may decide that Stock A is the safer choice, because Stock B's additional two percentage points of return is not worth the additional 10 pp standard deviation (greater risk or uncertainty of the expected return)."}, {"text": "In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range."}, {"text": "In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range."}, {"text": "Often, we want some information about the precision of the mean we obtained. We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "Often, we want some information about the precision of the mean we obtained. 
We can obtain this by determining the standard deviation of the sampled mean. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by:"}, {"text": "The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that unlike the variance, it is expressed in the same unit as the data."}]}, {"question": "What is the principle of maximum likelihood", "positive_ctxs": [{"text": "The principle of maximum likelihood is a method of obtaining the optimum values of the parameters that define a model. And while doing so, you increase the likelihood of your model reaching the \u201ctrue\u201d model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.If the likelihood function is differentiable, the derivative test for determining maxima can be applied."}, {"text": "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. 
The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.If the likelihood function is differentiable, the derivative test for determining maxima can be applied."}, {"text": "In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a probability distribution by maximizing a likelihood function, so that under the assumed statistical model the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.If the likelihood function is differentiable, the derivative test for determining maxima can be applied."}, {"text": "The principle of maximum caliber (MaxCal) or maximum path entropy principle, suggested by E. T. Jaynes, can be considered as a generalization of the principle of maximum entropy. It postulates that the most unbiased probability distribution of paths is the one that maximizes their Shannon entropy. This entropy of paths is sometimes called the \"caliber\" of the system, and is given by the path integral"}, {"text": "In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance."}, {"text": "In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. 
For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance."}, {"text": "The maximum entropy principle makes explicit our freedom in using different forms of prior data. As a special case, a uniform prior probability density (Laplace's principle of indifference, sometimes called the principle of insufficient reason), may be adopted. Thus, the maximum entropy principle is not merely an alternative way to view the usual methods of inference of classical statistics, but represents a significant conceptual generalization of those methods."}]}, {"question": "What is loss value", "positive_ctxs": [{"text": "Loss value implies how poorly or well a model behaves after each iteration of optimization. An accuracy metric is used to measure the algorithm's performance in an interpretable way. The accuracy of a model is usually determined after the model parameters and is calculated in the form of a percentage."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? 
What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "What is the difference between sampling error and margin of error", "positive_ctxs": [{"text": "Sampling error is one of two reasons for the difference between an estimate and the true, but unknown, value of the population parameter. The sampling error for a given sample is unknown but when the sampling is random, the maximum likely size of the sampling error is called the margin of error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}, {"text": "The margin of error is a statistic expressing the amount of random sampling error in the results of a survey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a survey of the entire population. The margin of error will be positive whenever a population is incompletely sampled and the outcome measure has positive variance, which is to say, the measure varies."}, {"text": "Polls based on samples of populations are subject to sampling error which reflects the effects of chance and uncertainty in the sampling process. 
Sampling polls rely on the law of large numbers to measure the opinions of the whole population based only on a subset, and for this purpose the absolute size of the sample is important, but the percentage of the whole population is not important (unless it happens to be close to the sample size). The possible difference between the sample and whole population is often expressed as a margin of error - usually defined as the radius of a 95% confidence interval for a particular statistic."}, {"text": "A caution is that an estimate of a trend is subject to a larger error than an estimate of a level. This is because if one estimates the change, the difference between two numbers X and Y, then one has to contend with errors in both X and Y. A rough guide is that if the change in measurement falls outside the margin of error it is worth attention."}, {"text": "A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample, however if a pollster wishes to reduce the margin of error to 1% they would need a sample of around 10,000 people. In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error and a sample size of around 500\u20131,000 is a typical compromise for political polls."}, {"text": "According to sampling theory, this assumption is reasonable when the sampling fraction is small. The margin of error for a particular sampling method is essentially the same regardless of whether the population of interest is the size of a school, city, state, or country, as long as the sampling fraction is small."}, {"text": "Observational error (or measurement error) is the difference between a measured value of a quantity and its true value. In statistics, an error is not a \"mistake\". 
Variability is an inherent part of the results of measurements and of the measurement process."}]}, {"question": "What are the types of regression analysis", "positive_ctxs": [{"text": "Below are the different regression techniques: Ridge Regression. Lasso Regression. Polynomial Regression. Bayesian Linear Regression."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. 
The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression or other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:"}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}, {"text": "Datasets consisting of rows of observations and columns of attributes characterizing those observations. Typically used for regression analysis or classification but other types of algorithms can also be used. This section includes datasets that do not fit in the above categories."}]}, {"question": "How do you find the support vector", "positive_ctxs": [{"text": "Support vectors are the elements of the training set that would change the position of the dividing hyperplane if removed. d+ = the shortest distance to the closest positive point d- = the shortest distance to the closest negative point The margin (gutter) of a separating hyperplane is d+ + d\u2013."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. 
For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Do you support the unprovoked military action by the USA?will likely result in data skewed in different directions, although they are both polling about the support for the war. A better way of wording the question could be \"Do you support the current US military action abroad?\" A still more nearly neutral way to put that question is \"What is your view about the current US military action abroad?\""}, {"text": ", this approach defines a general class of algorithms named Tikhonov regularization. For instance, using the hinge loss leads to the support vector machine algorithm, and using the epsilon-insensitive loss leads to support vector regression."}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. 
Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}]}, {"question": "What is one shot learning in neural networks", "positive_ctxs": [{"text": "Whereas most machine learning based object categorization algorithms require training on hundreds or thousands of samples/images and very large datasets, one-shot learning aims to learn information about object categories from one, or only a few, training samples/images."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Ans and Rousset (1997) also proposed a two-network artificial neural architecture with memory self-refreshing that overcomes catastrophic interference when sequential learning tasks are carried out in distributed networks trained by backpropagation. The principle is to interleave, at the time when new external patterns are learned, those to-be-learned new external patterns with internally generated pseudopatterns, or 'pseudo-memories', that reflect the previously learned information. What mainly distinguishes this model from those that use classical pseudorehearsal in feedforward multilayer networks is a reverberating process that is used for generating pseudopatterns."}, {"text": "Along with rising interest in neural networks beginning in the mid 1980s, interest grew in deep reinforcement learning where a neural network is used to represent policies or value functions. As in such a system, the entire decision making process from sensors to motors in a robot or agent involves a single layered neural network, it is sometimes called end-to-end reinforcement learning. 
One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon."}, {"text": "Self learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named Crossbar Adaptive Array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment."}, {"text": "Self learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named Crossbar Adaptive Array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}, {"text": "A generative adversarial network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Two neural networks contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss)."}]}, {"question": "Is fine tuning a pre trained model equivalent to transfer learning", "positive_ctxs": [{"text": "Fine tuning is one approach to transfer learning. 
In Transfer Learning or Domain Adaptation we train the model with a dataset and after we train the same model with another dataset that has a different distribution of classes (or even with other classes than in the training dataset)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Tuning space mapping utilizes a so-called tuning model\u2014constructed invasively from the fine model\u2014as well as a calibration process that translates the adjustment of the optimized tuning model parameters into relevant updates of the design variables. The space mapping concept has been extended to neural-based space mapping for large-signal statistical modeling of nonlinear microwave devices. Space mapping is supported by sound convergence theory and is related to the defect-correction approach. A 2016 state-of-the-art review is devoted to aggressive space mapping."}, {"text": "The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit, first proposed by Warren McCulloch and Walter Pitts in 1943. The model was specifically targeted as a computational model of the \"nerve net\" in the brain. As a transfer function, it employed a threshold, equivalent to using the Heaviside step function."}, {"text": "Domain adaptation is the ability to apply an algorithm trained in one or more \"source domains\" to a different (but related) \"target domain\". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different from the source feature space or spaces."}, {"text": "In 1976 Stevo Bozinovski and Ante Fulgosi published a paper explicitly addressing transfer learning in neural networks training. The paper gives a mathematical and geometrical model of transfer learning. 
In 1981 a report was given on the application of transfer learning in training a neural network on a dataset of images representing letters of computer terminals."}, {"text": "The transfer function (activation function) of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a linear transfer function has an equivalent single-layer network; a non-linear function is therefore necessary to gain the advantages of a multi-layer network. Below, u refers in all cases to the weighted sum of all the inputs to the neuron, i.e."}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}, {"text": "Step-based learning schedules change the learning rate according to some predefined steps. The decay application formula is here defined as:"}]}, {"question": "What is ResNet neural network", "positive_ctxs": [{"text": "A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers. Typical ResNet models are implemented with double- or triple-layer skips that contain nonlinearities (ReLU) and batch normalization in between."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}, {"text": "proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; If the width is smaller or equal to the input dimension, then deep neural network is not a universal approximator."}]}, {"question": "What is the difference between an optimization problem and a machine learning problem", "positive_ctxs": [{"text": "Optimization falls in this category \u2014 given an optimization problem, you can, in principle, find a solution to the problem, without any ambiguity whatsoever. Machine learning, on the other hand, falls in the domain of engineering. 
Problems in engineering are often not mathematically well-defined."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "What is the difference between class limits and class boundaries in statistics", "positive_ctxs": [{"text": "Class limits specify the span of data values that fall within a class. Class boundaries are possible data values. Class boundaries are not possible data values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Decide the individual class limits and select a suitable starting point of the first class which is arbitrary; it may be less than or equal to the minimum value. Usually it is started before the minimum value in such a way that the midpoint (the average of lower and upper class limits of the first class) is properly placed."}, {"text": "Decide the individual class limits and select a suitable starting point of the first class which is arbitrary; it may be less than or equal to the minimum value. 
Usually it is started before the minimum value in such a way that the midpoint (the average of lower and upper class limits of the first class) is properly placed."}, {"text": "Decision boundaries are not always clear cut. That is, the transition from one class in the feature space to another is not discontinuous, but gradual. This effect is common in fuzzy logic based classification algorithms, where membership in one class or another is ambiguous."}, {"text": "The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by very unbalanced class priors in the test sets."}, {"text": "Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known \u2013 before observation \u2013 and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities"}, {"text": "Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known \u2013 before observation \u2013 and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities"}, {"text": "The Kolmogorov structure function of an individual data string expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. 
The structure function determines all stochastic properties of the individual data string: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the true model is in the model class considered or not. In the classical case we talk about a set of data with a probability distribution, and the properties are those of the expectations."}]}, {"question": "What is a federated learning model", "positive_ctxs": [{"text": "Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model with training data distributed over a large number of clients each with unreliable and relatively slow network connections."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power where federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are identically distributed (i.i.d.) and roughly have the same size."}, {"text": "The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power where federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are identically distributed (i.i.d.) 
and roughly have the same size."}, {"text": "To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists in transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model. In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies. Assuming a federated round composed by one iteration of the learning process, the learning procedure can be summarized as follows:"}, {"text": "To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists in transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model. In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. 
However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies. Assuming a federated round composed by one iteration of the learning process, the learning procedure can be summarized as follows:"}, {"text": "In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system."}, {"text": "In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the nodes selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system."}, {"text": "The generated model delivers insights based on the global patterns of nodes. However, if a participating node wishes to learn from global patterns but also adapt outcomes to its peculiar status, the federated learning methodology can be adapted to generate two models at once in a multi-task learning framework. 
In addition, clustering techniques may be applied to aggregate nodes that share some similarities after the learning process is completed."}]}, {"question": "What does neural network convergence mean", "positive_ctxs": [{"text": "In the context of conventional artificial neural networks convergence describes a progression towards a network state where the network has learned to properly respond to a set of training patterns within some margin of error."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When Xn converges in r-th mean to X for r = 2, we say that Xn converges in mean square (or in quadratic mean) to X.Convergence in the r-th mean, for r \u2265 1, implies convergence in probability (by Markov's inequality). Furthermore, if r > s \u2265 1, convergence in r-th mean implies convergence in s-th mean. Hence, convergence in mean square implies convergence in mean."}, {"text": "After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error.The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them with a step size decreasing with increasing distance order, compared to (online) k-means clustering a much more robust convergence of the algorithm can be achieved. The neural gas model does not delete a node and also does not create new nodes."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Almost sure convergence is also called strong convergence of random variables. 
This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However the weak law is known to hold in certain conditions where the strong law does not hold and then the convergence is only weak (in probability)."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}, {"text": "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks."}]}, {"question": "Why is the mean useful in statistics", "positive_ctxs": [{"text": "The mean is an important measure because it incorporates the score from every subject in the research study. The required steps for its calculation are: count the total number of cases\u2014referred in statistics as n; add up all the scores and divide by the total number of cases."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that has the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing."}, {"text": "One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that has the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. 
The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}, {"text": "Hypothesis testing provides a means of finding test statistics used in significance testing. The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination. The two methods remain philosophically distinct."}]}, {"question": "How do you interpret a positively skewed distribution", "positive_ctxs": [{"text": "Interpreting. If skewness is positive, the data are positively skewed or skewed right, meaning that the right tail of the distribution is longer than the left. If skewness is negative, the data are negatively skewed or skewed left, meaning that the left tail is longer."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}, {"text": "Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "bigger than x, it does not necessarily mean you have made it plausible that it is smaller or equal than x; alternatively you may just have done a lousy measurement with low accuracy. Confirming the null hypothesis two-sided would amount to positively proving it is bigger or equal than 0 AND to positively proving it is smaller or equal than 0; this is something for which you need infinite accuracy as well as exactly zero effect neither of which normally are realistic. Also measurements will never indicate a non-zero probability of exactly zero difference.)"}, {"text": "A distribution that is skewed to the right (the tail of the distribution is longer on the right), will have a positive skewness."}, {"text": "negative skew: The left tail is longer; the mass of the distribution is concentrated on the right of the figure. The distribution is said to be left-skewed, left-tailed, or skewed to the left, despite the fact that the curve itself appears to be skewed or leaning to the right; left instead refers to the left tail being drawn out and, often, the mean being skewed to the left of a typical center of the data. 
A left-skewed distribution usually appears as a right-leaning curve."}]}, {"question": "What is the use of principal component analysis", "positive_ctxs": [{"text": "Principal Component Analysis (PCA) is used to explain the variance-covariance structure of a set of variables through linear combinations. It is often used as a dimensionality-reduction technique."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Multilinear principal component analysis (MPCA) is a multilinear extension of principal component analysis (PCA). MPCA is employed in the analysis of n-way arrays, i.e. a cube or hyper-cube of numbers, also informally referred to as a \"data tensor\"."}, {"text": "Sparse principal component analysis (sparse PCA) is a specialised technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables."}, {"text": "\"mean centering\") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations."}, {"text": "\"mean centering\") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. 
A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations."}, {"text": "\"mean centering\") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations."}, {"text": "\"mean centering\") is necessary for performing classical PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component might instead correspond more or less to the mean of the data. A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations."}, {"text": "As in factor analysis or principal component analysis, the first axis is the most important dimension, the second axis the second most important, and so on, in terms of the amount of variance accounted for. The number of axes to be retained for analysis is determined by calculating modified eigenvalues."}]}, {"question": "What is Y hat in regression", "positive_ctxs": [{"text": "Y hat (written \u0177 ) is the predicted value of y (the dependent variable) in a regression equation. 
It can also be considered to be the average value of the response variable. The equation is calculated during regression analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Logistic regression is one way to generalize the odds ratio beyond two binary variables. Suppose we have a binary response variable Y and a binary predictor variable X, and in addition we have other predictor variables Z1, ..., Zp that may or may not be binary. If we use multiple logistic regression to regress Y on X, Z1, ..., Zp, then the estimated coefficient"}, {"text": "Convexity can be extended for a totally ordered set X endowed with the order topology. Let Y \u2286 X. The subspace Y is a convex set if for each pair of points a, b in Y such that a \u2264 b, the interval [a, b] = {x \u2208 X | a \u2264 x \u2264 b} is contained in Y. That is, Y is convex if and only if for all a, b in Y, a \u2264 b implies [a, b] \u2286 Y."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)\u22121X'. This has the effect of minimizing the maximum variance of the predicted values."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. 
This is proven in the following."}]}, {"question": "How do you analyze motion", "positive_ctxs": [{"text": "Suggested clip \u00b7 37 seconds (15:32\u201348:19): Motion 5 | How to Use Motion Tracking, Analyze Motion, and Match (YouTube)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "A great deal of analysis over human motion is needed because human movement is very complex. MIT and the University of Twente are both working to analyze these movements. They are doing this through a combination of computer models, camera systems, and electromyograms."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space. To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "But sometimes, ethical and/or methodological restrictions prevent you from conducting an experiment (e.g. 
how does isolation influence a child's cognitive functioning?). Then you can still do research, but it is not causal, it is correlational."}]}, {"question": "What is parameter with example", "positive_ctxs": [{"text": "A parameter is any summary number, like an average or percentage, that describes the entire population. The population mean (the greek letter \"mu\") and the population proportion p are two different population parameters. For example: The population comprises all likely American voters, and the parameter is p."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. 
If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}]}, {"question": "What is TP TN FP FN", "positive_ctxs": [{"text": "FP. N. FN. TN. where: P = Positive; N = Negative; TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. If any of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value."}, {"text": "In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A similar situation exists between the functional classes FP and #P. By a generalization of Ladner's theorem, there are also problems in neither FP nor #P-complete as long as FP \u2260 #P. As in the decision case, a problem in the #CSP is defined by a set of relations. Each problem takes a Boolean formula as input and the task is to compute the number of satisfying assignments. 
This can be further generalized by using larger domain sizes and attaching a weight to each satisfying assignment and computing the sum of these weights."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}]}, {"question": "What is Epsilon greedy policy", "positive_ctxs": [{"text": "Epsilon greedy policy is a way of selecting random actions with uniform distribution from a set of available actions. This policy selects random actions in twice if the value of epsilon is 0.2. Consider a following example, There is a robot with capability to move in 4 direction. Up,down,left,right."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In particular, the odd greedy expansion of a fraction x/y is formed by a greedy algorithm of this type in which all denominators are constrained to be odd numbers; it is known that, whenever y is odd, there is a finite Egyptian fraction expansion in which all denominators are odd, but it is not known whether the odd greedy expansion is always finite."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "The most popular use of greedy algorithms is for finding the minimal spanning tree where finding the optimal solution is possible with this method. Huffman Tree, Kruskal, Prim, Sollin are greedy algorithms that can solve this optimization problem."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What are the positive effects of robots", "positive_ctxs": [{"text": "7 Advantages of Robots in the WorkplaceSafety. Safety is the most obvious advantage of utilizing robotics. Speed. Robots don't get distracted or need to take breaks. Consistency. Robots never need to divide their attention between a multitude of things. Perfection. Robots will always deliver quality. Happier Employees. Job Creation. Productivity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? 
What are the extended dimensions of the pressure of the two parts?"}, {"text": "After the industrial robots were improved rapidly for over a hundred years since the Industrial Revolution, people started to consider the use of robots at home.One of the earliest domestic robots is called \u201cHERO\u201d, which was sold during the 1980s. \u201cOf all the educational and personal robots created during the 1980s the Heathkit HERO robots were by far the most successful and most popular.\u201d There were four types of Hero robots created by Heathkit. The first model is called HERO 1."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed. A 2017 study by PricewaterhouseCoopers sees the People\u2019s Republic of China gaining economically the most out of AI with 26,1% of GDP until 2030."}, {"text": "The long-term economic effects of AI are uncertain. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed. A 2017 study by PricewaterhouseCoopers sees the People\u2019s Republic of China gaining economically the most out of AI with 26,1% of GDP until 2030."}, {"text": "The long-term economic effects of AI are uncertain. 
A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit, if productivity gains are redistributed. A 2017 study by PricewaterhouseCoopers sees the People\u2019s Republic of China gaining economically the most out of AI with 26,1% of GDP until 2030."}, {"text": "Petman is one of the first and most advanced humanoid robots developed at Boston Dynamics. Some of the humanoid robots such as Honda Asimo are over actuated. On the other hand, there are some humanoid robots like the robot developed at Cornell University that do not have any actuators and walk passively descending a shallow slope."}]}, {"question": "What is special about a least squares regression line", "positive_ctxs": [{"text": "Given any collection of pairs of numbers (except when all the x-values are the same) and the corresponding scatter diagram, there always exists exactly one straight line that fits the data better than any other, in the sense of minimizing the sum of the squared errors. It is called the least squares regression line."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. 
Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve."}, {"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "The SUR model is usually estimated using the feasible generalized least squares (FGLS) method. This is a two-step method where in the first step we run ordinary least squares regression for (1). The residuals from this regression are used to estimate the elements of matrix"}]}, {"question": "What is the difference between regression and structural equation modeling", "positive_ctxs": [{"text": "There are two main differences between regression and structural equation modelling. The first is that SEM allows us to develop complex path models with direct and indirect effects. This allows us to more accurately model causal mechanisms we are interested in. 
The second key difference is to do with measurement."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Confirmatory factor analysis (CFA) is a more complex approach that tests the hypothesis that the items are associated with specific factors. CFA uses structural equation modeling to test a measurement model whereby loading on the factors allows for evaluation of relationships between observed variables and unobserved variables. Structural equation modeling approaches can accommodate measurement error, and are less restrictive than least-squares estimation."}, {"text": "Path coefficients are standardized versions of linear regression weights which can be used in examining the possible causal linkage between statistical variables in the structural equation modeling approach. The standardization involves multiplying the ordinary regression coefficient by the standard deviations of the corresponding explanatory variable: these can then be compared to assess the relative effects of the variables within the fitted regression model. The idea of standardization can be extended to apply to partial regression coefficients."}, {"text": "An iterative algorithm solves the structural equation model by estimating the latent variables by using the measurement and structural model in alternating steps, hence the procedure's name, partial. The measurement model estimates the latent variables as a weighted sum of its manifest variables. The structural model estimates the latent variables by means of simple or multiple linear regression between the latent variables estimated by the measurement model."}, {"text": "Structural equation models are often used to assess unobservable 'latent' constructs. They often invoke a measurement model that defines latent variables using one or more observed variables, and a structural model that imputes relationships between latent variables. 
The links between constructs of a structural equation model may be estimated with independent regression equations or through more involved approaches such as those employed in LISREL.Use of SEM is commonly justified in the social sciences because of its ability to impute relationships between unobserved constructs (latent variables) and observable variables."}, {"text": "In addition, by an adjustment PLS-PM is capable of consistently estimating certain parameters of common factor models as well, through an approach called consistent PLS (PLSc). A further related development is factor-based PLS-PM (PLSF), a variation of which employs PLSc as a basis for the estimation of the factors in common factor models; this method significantly increases the number of common factor model parameters that can be estimated, effectively bridging the gap between classic PLS and covariance\u2010based structural equation modeling. Furthermore, PLS-PM can be used for out-sample prediction purposes, and can be employed as an estimator in confirmatory composite analysis.The PLS structural equation model is composed of two sub-models: the measurement model and structural model."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}, {"text": "It can be evaluated through different forms of factor analysis, structural equation modeling (SEM), and other statistical evaluations. It is important to note that a single study does not prove construct validity. 
Rather it is a continuous process of evaluation, reevaluation, refinement, and development."}]}, {"question": "What is unit and standard unit", "positive_ctxs": [{"text": "A unit of measurement is some specific quantity that has been chosen as the standard against which other measurements of the same kind are made. The term standard refers to the physical object on which the unit of measurement is based."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}, {"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}, {"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}, {"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}, {"text": "in the exponent ensures that the distribution has unit variance (i.e., variance being equal to one), and therefore also unit standard deviation. This function is symmetric around"}, {"text": "This provides power to the robot and allows it to move itself and its cutting blades. There is also a control unit which helps the mower move. This unit also contains a memory unit which records and memorizes its operation programming."}, {"text": "Reactive plans can be expressed also by connectionist networks like artificial neural networks or free-flow hierarchies. The basic representational unit is a unit with several input links that feed the unit with \"an abstract activity\" and output links that propagate the activity to following units. 
Each unit itself works as the activity transducer."}]}, {"question": "What is the application of eigenvalues and eigenvectors", "positive_ctxs": [{"text": "Eigenvalues and eigenvectors allow us to \"reduce\" a linear operation to separate, simpler, problems. For example, if a stress is applied to a \"plastic\" solid, the deformation can be dissected into \"principle directions\"- those directions in which the deformation is greatest."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces."}, {"text": "The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors"}, {"text": "Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed."}, {"text": "The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. 
Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable."}, {"text": "is of rank C \u2212 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section."}, {"text": "is of rank C \u2212 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section."}, {"text": "is of rank C \u2212 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section."}]}, {"question": "Is feature scaling required for random forest", "positive_ctxs": [{"text": "Role of Scaling is mostly important in algorithms that are distance based and require Euclidean Distance. Random Forest is a tree-based model and hence does not require feature scaling."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. 
One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the \u201cobserved\u201d data from suitably generated synthetic data."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "They are often relatively inaccurate. Many other predictors perform better with similar data. This can be remedied by replacing a single decision tree with a random forest of decision trees, but a random forest is not as easy to interpret as a single decision tree."}, {"text": "The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. A random forest dissimilarity can be attractive because it handles mixed variable types very well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. 
The random forest dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the \"Addcl 1\" random forest dissimilarity weighs the contribution of each variable according to how dependent it is on other variables."}, {"text": "is to fit a random forest to the data. During the fitting process the out-of-bag error for each data point is recorded and averaged over the forest (errors on an independent test set can be substituted if bagging is not used during training)."}]}, {"question": "What is a multivariate data set", "positive_ctxs": [{"text": "2 Multivariate Data. Multivariate data contains, at each sample point, multiple scalar values that represent different simulated or measured quantities."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A tensor is a multilinear transformation that maps a set of vector spaces to another vector space. A data tensor is a collection of multivariate observations organized into a M-way array."}, {"text": "Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data set is similar to the normal distribution, therefore a sufficiently small p-value indicates non-normal data. Multivariate normality tests include the Cox\u2013Small test"}, {"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "In statistics, a latent class model (LCM) relates a set of observed (usually discrete) multivariate variables to a set of latent variables. It is a type of latent variable model. It is called a latent class model because the latent variable is discrete."}, {"text": "Pre-processing is essential to analyze the multivariate data sets before data mining. 
The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data."}, {"text": "Correspondence analysis (CA) or reciprocal averaging is a multivariate statistical technique proposed by Herman Otto Hartley (Hirschfeld) and later developed by Jean-Paul Benz\u00e9cri. It is conceptually similar to principal component analysis, but applies to categorical rather than continuous data. In a similar manner to principal component analysis, it provides a means of displaying or summarising a set of data in two-dimensional graphical form."}, {"text": "Correspondence analysis (CA) or reciprocal averaging is a multivariate statistical technique proposed by Herman Otto Hartley (Hirschfeld) and later developed by Jean-Paul Benz\u00e9cri. It is conceptually similar to principal component analysis, but applies to categorical rather than continuous data. In a similar manner to principal component analysis, it provides a means of displaying or summarising a set of data in two-dimensional graphical form."}]}, {"question": "What is uncertainty in machine learning", "positive_ctxs": [{"text": "Uncertainty is a popular phenomenon in machine learning and a variety of methods to model uncertainty at different levels has been developed. Different types of uncertainty can be observed: (i) Input data are subject to noise, outliers, and errors."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. 
What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "It is very similar to program synthesis, which means a planner generates sourcecode which can be executed by an interpreter.An early example of a conditional planner is \u201cWarplan-C\u201d which was introduced in the mid 1970s. What is the difference between a normal sequence and a complicated plan, which contains if-then-statements? It has to do with uncertainty at runtime of a plan."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of \"interestingness\".Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves \"rules\" to store, manipulate or apply knowledge. 
The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system."}]}, {"question": "What is Bag of Words in image processing", "positive_ctxs": [{"text": "In computer vision, the bag-of-words model (BoW model) sometimes called bag-of-visual-words model can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What kind of graph is used depends on the application. For example, in natural language processing, linear chain CRFs are popular, which implement sequential dependencies in the predictions. In image processing the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions."}, {"text": "Caltech Large Scale Image Search Toolbox: a Matlab/C++ toolbox implementing Inverted File search for Bag of Words model. It also contains implementations for fast approximate nearest neighbor search using randomized k-d tree, locality-sensitive hashing, and hierarchical k-means."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "The motion analysis processing can in the simplest case be to detect motion, i.e., find the points in the image where something is moving. 
More complex types of processing can be to track a specific object in the image over time, to group points that belong to the same rigid object that is moving in the scene, or to determine the magnitude and direction of the motion of every point in the image. The information that is produced is often related to a specific image in the sequence, corresponding to a specific time-point, but then depends also on the neighboring images."}, {"text": "One of the simplest type of motion analysis is to detect image points that refer to moving points in the scene. The typical result of this processing is a binary image where all image points (pixels) that relate to moving points in the scene are set to 1 and all other points are set to 0. This binary image is then further processed, e.g., to remove noise, group neighboring pixels, and label objects."}, {"text": "The goals vary from noise removal to feature abstraction. Filtering image data is a standard process used in almost all image processing systems. Nonlinear filters are the most utilized forms of filter construction."}, {"text": "Discrete Laplace operator is often used in image processing e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives Laplace operator#Coordinate expressions and calculated as sum of differences over the nearest neighbours of the central pixel."}]}, {"question": "Does feature selection improve classification accuracy", "positive_ctxs": [{"text": "The main benefit claimed for feature selection, which is the main focus in this manuscript, is that it increases classification accuracy. 
It is believed that removing non-informative signal can reduce noise, and can increase the contrast between labelled groups."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Moreover, the developing use of propensity score matching to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results."}, {"text": "Post-pruning (or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to improve complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy of unseen objects."}, {"text": "Hence, high input dimensional typically requires tuning the classifier to have low variance and high bias. In practice, if the engineer can manually remove irrelevant features from the input data, this is likely to improve the accuracy of the learned function. In addition, there are many algorithms for feature selection that seek to identify the relevant features and discard the irrelevant ones."}, {"text": "Hence, high input dimensional typically requires tuning the classifier to have low variance and high bias. In practice, if the engineer can manually remove irrelevant features from the input data, this is likely to improve the accuracy of the learned function. In addition, there are many algorithms for feature selection that seek to identify the relevant features and discard the irrelevant ones."}, {"text": "Embedded methods have been recently proposed that try to combine the advantages of both previous methods. A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm."}, {"text": "Embedded methods have been recently proposed that try to combine the advantages of both previous methods. 
A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously, such as the FRMT algorithm."}, {"text": "It is also possible to improve the accuracy of the matching method by hybridizing the feature-based and template-based approaches. Naturally, this requires that the search and template images have features that are apparent enough to support feature matching."}]}, {"question": "In statistics what is the difference between a quartile and a quantile", "positive_ctxs": [{"text": "When used as nouns, quantile means one of the class of values of a variate which divides the members of a batch or sample into equal-sized subgroups of adjacent values or a probability distribution into distributions of equal probability, whereas quartile means any of the three points that divide an ordered"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows."}, {"text": "The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows."}, {"text": "In descriptive statistics, the interquartile range (IQR), also called the midspread, middle 50%, or H\u2011spread, is a measure of statistical dispersion, being equal to the difference between 75th and 25th percentiles, or between upper and lower quartiles, IQR = Q3 \u2212 Q1. In other words, the IQR is the first quartile subtracted from the third quartile; these quartiles can be clearly seen on a box plot on the data. 
It is a trimmed estimator, defined as the 25% trimmed range, and is a commonly used robust measure of scale."}, {"text": "In descriptive statistics, the interquartile range (IQR), also called the midspread, middle 50%, or H\u2011spread, is a measure of statistical dispersion, being equal to the difference between 75th and 25th percentiles, or between upper and lower quartiles, IQR = Q3 \u2212 Q1. In other words, the IQR is the first quartile subtracted from the third quartile; these quartiles can be clearly seen on a box plot on the data. It is a trimmed estimator, defined as the 25% trimmed range, and is a commonly used robust measure of scale."}, {"text": "Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known \u2013 before observation \u2013 and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities"}, {"text": "Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known \u2013 before observation \u2013 and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities"}, {"text": "Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions \u2013 i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. 
In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution."}]}, {"question": "How does residual network work", "positive_ctxs": [{"text": "A residual neural network (ResNet) is an artificial neural network (ANN) of a kind that builds on constructs known from pyramidal cells in the cerebral cortex. Residual neural networks do this by utilizing skip connections, or shortcuts to jump over some layers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "What is a good T stat", "positive_ctxs": [{"text": "Thus, the t-statistic measures how many standard errors the coefficient is away from zero. Generally, any t-value greater than +2 or less than \u2013 2 is acceptable. 
The higher the t-value, the greater the confidence we have in the coefficient as a predictor."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}, {"text": "A theory T implies the statement F. As the theory T is simpler than F, abduction says that there is a probability that the theory T is implied by F.The theory T, also called an explanation of the condition F, is an answer to the ubiquitous factual \"why\" question. For example, for the condition F is \"Why do apples fall?\". The answer is a theory T that implies that apples fall;"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value \u03bb, called an eigenvalue. This condition can be written as the equation"}, {"text": "If T is a theory whose objects of discourse can be interpreted as natural numbers, we say T is arithmetically sound if all theorems of T are actually true about the standard mathematical integers. For further information, see \u03c9-consistent theory."}, {"text": "It is common to make decisions under uncertainty. What can be done to make good (or at least the best possible) decisions under conditions of uncertainty? 
Info-gap robustness analysis evaluates each feasible decision by asking: how much deviation from an estimate of a parameter value, function, or set, is permitted and yet \"guarantee\" acceptable performance?"}, {"text": "The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable."}]}, {"question": "How does facial verification work", "positive_ctxs": [{"text": "A facial recognition system uses biometrics to map facial features from a photograph or video. It compares the information with a database of known faces to find a match. That's because facial recognition has all kinds of commercial applications. It can be used for everything from surveillance to marketing."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "How do you find the joint probability density function", "positive_ctxs": [{"text": "If X takes values in [a, b] and Y takes values in [c, d] then the pair (X, Y ) takes values in the product [a, b] \u00d7 [c, d]. The joint probability density function (joint pdf) of X and Y is a function f(x, y) giving the probability density at (x, y)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The joint probability distribution can be expressed either in terms of a joint cumulative distribution function or in terms of a joint probability density function (in the case of continuous variables) or joint probability mass function (in the case of discrete variables). These in turn can be used to find two other types of distributions: the marginal distribution giving the probabilities for any one of the variables with no reference to any specific ranges of values for the other variables, and the conditional probability distribution giving the probabilities for any subset of the variables conditional on particular values of the remaining variables."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. 
This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "To see this, consider the joint probability density function of X (X1,...,Xn). Because the observations are independent, the pdf can be written as a product of individual densities"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "What problems are suitable for supervised machine learning", "positive_ctxs": [{"text": "Some common types of problems built on top of classification and regression include recommendation and time series prediction respectively. 
Some popular examples of supervised machine learning algorithms are: Linear regression for regression problems. Random forest for classification and regression problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems (see the No free lunch theorem)."}, {"text": "A wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems (see the No free lunch theorem)."}, {"text": "Depending on the type and variation in training data, machine learning can be roughly categorized into three frameworks: supervised learning, unsupervised learning, and reinforcement learning. Multiple instance learning (MIL) falls under the supervised learning framework, where every training instance has a label, either discrete or real valued. MIL deals with problems with incomplete knowledge of labels in training sets."}, {"text": "Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system)."}, {"text": "Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system)."}, {"text": "Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. 
Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system)."}, {"text": "Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system)."}]}, {"question": "What is an activation function in machine learning", "positive_ctxs": [{"text": "In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs. A standard integrated circuit can be seen as a digital network of activation functions that can be \"ON\" (1) or \"OFF\" (0), depending on input."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "Below is an example of a learning algorithm for a single-layer perceptron. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. 
If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable."}, {"text": "Below is an example of a learning algorithm for a single-layer perceptron. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable."}, {"text": "Studies of recurrent febrile seizures have shown that seizures resulted in impaired learning and memory but also disrupted signaling that normally results in activation of cAMP response element binding factor (CREB), a transcription factor. For rats tested in the inhibitory avoidance learning paradigm, normally an activation of CREB occurs by phosphorylation at Ser133. This activation is impaired following recurrent febrile seizures."}, {"text": ".In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell. In its simplest form, this function is binary\u2014that is, either the neuron is firing or not."}]}, {"question": "What is the Bayesian probability of an error", "positive_ctxs": [{"text": "In statistical classification, Bayes error rate is the lowest possible error rate for any classifier of a random outcome (into, for example, one of two categories) and is analogous to the irreducible error. A number of approaches to the estimation of the Bayes error rate exist."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses; that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability."}, {"text": "Suppose there are n people at a party, each of whom brought an umbrella. At the end of the party everyone picks an umbrella out of the stack of umbrellas and leaves. What is the probability that no one left with his/her own umbrella?"}, {"text": "The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator doesn't belong to the critical region given that the alternative hypothesis is true."}, {"text": "The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. 
The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator doesn't belong to the critical region given that the alternative hypothesis is true."}, {"text": "The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator doesn't belong to the critical region given that the alternative hypothesis is true."}, {"text": "The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator doesn't belong to the critical region given that the alternative hypothesis is true."}]}, {"question": "What is a cross sectional study in statistics", "positive_ctxs": [{"text": "A cross-sectional study involves looking at data from a population at one specific point in time. 
Cross-sectional studies are observational in nature and are known as descriptive research, not causal or relational, meaning that you can't use them to determine the cause of something, such as a disease."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Cross-sectional data, or a cross section of a study population, in statistics and econometrics is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at the one point or period of time. The analysis might also have no regard to differences in time. Analysis of cross-sectional data usually consists of comparing the differences among selected subjects."}, {"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}]}, {"question": "Is Random Forest a decision tree", "positive_ctxs": [{"text": "A random forest is simply a collection of decision trees whose results are aggregated into one final result. Their ability to limit overfitting without substantially increasing error due to bias is why they are such powerful models. One way Random Forests reduce variance is by training on different samples of the data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "Is median filter a low pass filter", "positive_ctxs": [{"text": "Low Pass filtering: It is also known as the smoothing filter. It removes the high-frequency content from the image. Median Filtering: It is also known as nonlinear filtering. It is used to eliminate salt and pepper noise."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, if an image contains a low amount of noise but with relatively high magnitude, then a median filter may be more appropriate."}, {"text": "An alternative to the RTS algorithm is the modified Bryson\u2013Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive"}, {"text": "Decimate by a factor of MStep 1 requires a lowpass filter after increasing (expanding) the data rate, and step 2 requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the M > L case, the anti-aliasing filter cutoff,"}, {"text": "With a low gain, the filter follows the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness."}, {"text": "A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high pass filters."}, {"text": "Report filter is used to apply a filter to an entire table. For example, if the \"Color of Item\" field is dragged to this area, then the table constructed will have a report filter inserted above the table. 
This report filter will have drop-down options (Black, Red, and White in the example above)."}, {"text": "A filter implemented in a computer program (or a so-called digital signal processor) is a discrete-time system; a different (but parallel) set of mathematical concepts defines the behavior of such systems. Although a digital filter can be an IIR filter if the algorithm implementing it includes feedback, it is also possible to easily implement a filter whose impulse truly goes to zero after N time steps; this is called a finite impulse response (FIR) filter."}]}, {"question": "What is linear regression used for", "positive_ctxs": [{"text": "Linear regression is the next step up after correlation. It is used when we want to predict the value of a variable based on the value of another variable. The variable we want to predict is called the dependent variable (or sometimes, the outcome variable)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive)."}, {"text": "A basic tool for econometrics is the multiple linear regression model. 
In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables."}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What did bandits do", "positive_ctxs": [{"text": "A person who engages in banditry is known as a bandit and primarily commits crimes such as extortion, robbery, and murder, either as an individual or in groups. Banditry is a vague concept of criminality and in modern usage can be synonymous for gangsterism, brigandage, marauding, and thievery."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The rebellion took the Ming almost two years to crush.Similarly, small groups of local bandits could also end up joining larger groups of rebels. 
Robinson points out that bandits obviously perceived the benefits of supporting rebel cause but they also could be repelled to join; as a result, the 1510s rebels attracted a lot of local bandits and outlaws as they moved from one place to another."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}, {"text": "But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c?"}]}, {"question": "Which is better K means or hierarchical clustering", "positive_ctxs": [{"text": "Difference between K Means and Hierarchical clustering Hierarchical clustering can't handle big data well but K Means clustering can. This is because the time complexity of K Means is linear i.e. O(n) while that of hierarchical clustering is quadratic i.e. O(n2)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. 
Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "Which treatment is considered better is determined by an inequality between two ratios (successes/total). 
The reversal of the inequality between the ratios, which creates Simpson's paradox, happens because two effects occur together:"}]}, {"question": "If correlation does not imply causation what does it do", "positive_ctxs": [{"text": "In statistics, the phrase \"correlation does not imply causation\" refers to the inability to legitimately deduce a cause-and-effect relationship between two variables solely on the basis of an observed association or correlation between them."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation)."}, {"text": "However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation)."}, {"text": "The classical measure of dependence, the Pearson correlation coefficient, is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 by G\u00e1bor J. Sz\u00e9kely in several lectures to address this deficiency of Pearson's correlation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence while distance correlation = 0 does imply independence."}, {"text": "In psychology practically all null hypotheses are claimed to be false for sufficiently large samples so \"...it is usually nonsensical to perform an experiment with the sole aim of rejecting the null hypothesis.\". \"Statistically significant findings are often misleading\" in psychology. Statistical significance does not imply practical significance and correlation does not imply causation."}, {"text": "In psychology practically all null hypotheses are claimed to be false for sufficiently large samples so \"...it is usually nonsensical to perform an experiment with the sole aim of rejecting the null hypothesis.\". 
\"Statistically significant findings are often misleading\" in psychology. Statistical significance does not imply practical significance and correlation does not imply causation."}, {"text": "In psychology practically all null hypotheses are claimed to be false for sufficiently large samples so \"...it is usually nonsensical to perform an experiment with the sole aim of rejecting the null hypothesis.\". \"Statistically significant findings are often misleading\" in psychology. Statistical significance does not imply practical significance and correlation does not imply causation."}, {"text": "In psychology practically all null hypotheses are claimed to be false for sufficiently large samples so \"...it is usually nonsensical to perform an experiment with the sole aim of rejecting the null hypothesis.\". \"Statistically significant findings are often misleading\" in psychology. Statistical significance does not imply practical significance and correlation does not imply causation."}]}, {"question": "How do you measure validity in statistics", "positive_ctxs": [{"text": "Tests of Correlation: The validity of a test is measured by the strength of association, or correlation, between the results obtained by the test and by the criterion measure."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "On first glance, internal and external validity seem to contradict each other \u2013 to get an experimental design you have to control for all interfering variables. That is why you often conduct your experiment in a laboratory setting. While gaining internal validity (excluding interfering variables by keeping them constant) you lose ecological or external validity because you establish an artificial laboratory setting."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. 
However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "On the other hand, with observational research you can not control for interfering variables (low internal validity) but you can measure in the natural (ecological) environment, at the place where behavior normally occurs. However, in doing so, you sacrifice internal validity."}, {"text": "How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending in the exact numbers, the average causal effect might be an increase in blood pressure."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What does gradient mean in Machine Learning", "positive_ctxs": [{"text": "Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. 
In machine learning, we use gradient descent to update the parameters of our model."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In February 2017, IBM announced the first Machine Learning Hub in Silicon Valley to share expertise and teach companies about machine learning and data science. In April 2017 they expanded to Toronto, Beijing, and Stuttgart. A fifth Machine Learning Hub was created in August 2017 in India, Bangalore."}, {"text": "Bifet, Albert; Gavald\u00e0, Ricard; Holmes, Geoff; Pfahringer, Bernhard (2018). Machine Learning for Data Streams with Practical Examples in MOA. Adaptive Computation and Machine Learning."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. 
However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}, {"text": "A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence is less clear."}]}, {"question": "How do you handle a categorical variable with many levels", "positive_ctxs": [{"text": "To deal with categorical variables that have more than two levels, the solution is one-hot encoding. This takes every level of the category (e.g., Dutch, German, Belgian, and other), and turns it into a variable with two levels (yes/no)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE).Because this process creates multiple new variables, it is prone to creating a big p problem (too many predictors) if there are many unique values in the original column. 
Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.In practical usage this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "How many of each control and noise factors should be taken into account?The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, which is where their intervention testing the hypothesis is implemented, and a control group, which has all the same element as the experimental group, without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can certify with some certainty that this one element is what caused the observed change."}, {"text": "An interaction may arise when considering the relationship among three or more variables, and describes a situation in which the simultaneous influence of two variables on a third is not additive. Interactions may arise with categorical variables in two ways: either categorical by categorical variable interactions, or categorical by continuous variable interactions."}]}, {"question": "What is Fourier transform of an image", "positive_ctxs": [{"text": "Brief Description. The Fourier Transform is an important image processing tool which is used to decompose an image into its sine and cosine components. 
The output of the transformation represents the image in the Fourier or frequency domain, while the input image is the spatial domain equivalent."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This is G, since the Fourier transform of this integral is easy. Each fixed \u03c4 contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k."}, {"text": "Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):"}, {"text": "Discrete Fourier transform (general).The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on a fast Fourier transform (FFT). The Nyquist\u2013Shannon sampling theorem is critical for understanding the output of such discrete transforms."}, {"text": "In this case the Fourier series is finite and its value is equal to the sampled values at all points. The set of coefficients is known as the discrete Fourier transform (DFT) of the given sample sequence. The DFT is one of the key tools of digital signal processing, a field whose applications include radar, speech encoding, image compression."}, {"text": "The JPEG image format is an application of the closely related discrete cosine transform.The fast Fourier transform is an algorithm for rapidly computing the discrete Fourier transform. It is used not only for calculating the Fourier coefficients but, using the convolution theorem, also for computing the convolution of two finite sequences. 
They in turn are applied in digital filters and as a rapid multiplication algorithm for polynomials and large integers (Sch\u00f6nhage\u2013Strassen algorithm)."}, {"text": "Discrete-time Fourier transform (DTFT): Equivalent to the Fourier transform of a \"continuous\" function that is constructed from the discrete input function by using the sample values to modulate a Dirac comb. When the sample values are derived by sampling a function on the real line, \u0192(x), the DTFT is equivalent to a periodic summation of the Fourier transform of \u0192. The DTFT output is always periodic (cyclic)."}, {"text": "is an arbitrary cutoff frequency (a.k.a. The impulse response of such a filter is given by the inverse Fourier transform of the frequency response:"}]}, {"question": "When can Bayes theorem be used", "positive_ctxs": [{"text": "The Bayes theorem describes the probability of an event based on the prior knowledge of the conditions that might be related to the event. If we know the conditional probability , we can use the bayes rule to find out the reverse probabilities ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When actual economic data is non-convex, it can be made convex by taking convex hulls. The Shapley\u2013Folkman theorem can be used to show that, for large markets, this approximation is accurate, and leads to a \"quasi-equilibrium\" for the original non-convex market."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. 
All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\u00efve Bayes is not (necessarily) a Bayesian method."}, {"text": "It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in Bayes theorem, the posterior probability turns out to be a beta distribution."}, {"text": "In statistics, naive Bayes classifiers are a family of simple \"probabilistic classifiers\" based on applying Bayes' theorem with strong (na\u00efve) independence assumptions between the features. They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve higher accuracy levels.Na\u00efve Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. 
Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers."}]}, {"question": "What is the difference between a Type I error and a Type II error", "positive_ctxs": [{"text": "Type 1 error, in statistical hypothesis testing, is the error caused by rejecting a null hypothesis when it is true. Type II error is the error that occurs when the null hypothesis is accepted when it is not true."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "The crossover error rate (CER) is the point at which Type I errors and Type II errors are equal and represents the best way of measuring a biometrics' effectiveness. 
A system with a lower CER value provides more accuracy than a system with a higher CER value."}, {"text": "For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality:"}, {"text": "For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). The following terms describe tests in terms of such optimality:"}]}, {"question": "How do you find the distance of a clustered Matrix", "positive_ctxs": [{"text": "Distance MatrixThe proximity between object can be measured as distance matrix. For example, distance between object A = (1, 1) and B = (1.5, 1.5) is computed as.Another example of distance between object D = (3, 4) and F = (3, 3.5) is calculated as.More items"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Campbell and Fiske (1959) developed the Multitrait-Multimethod Matrix to assess the construct validity of a set of measures in a study. The approach stresses the importance of using both discriminant and convergent validation techniques when assessing new tests. In other words, in order to establish construct validity, you have to demonstrate both convergence and discrimination."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. 
Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a set of data that contains information on medical patients your goal is to find correlation for a disease. Before you can start iterating through the data ensure that you have an understanding of the result, are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}]}, {"question": "How do you find the moment in statistics", "positive_ctxs": [{"text": "What Are Moments in Statistics?Moments About the MeanFirst, calculate the mean of the values.Next, subtract this mean from each value.Then raise each of these differences to the sth power.Now add the numbers from step #3 together.Finally, divide this sum by the number of values we started with."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. 
You can do the same for pseudo term vectors:"}]}, {"question": "What is additivity in statistics", "positive_ctxs": [{"text": "Additivity is a property pertaining to a set of interdependent index numbers related by definition or by accounting constraints under which an aggregate is defined as the sum of its components; additivity requires this identity to be preserved when the values of both an aggregate and its components in some reference"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many consequences of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity implies that the variance is constant for all treatments."}]}, {"question": "What are the four moments of statistics", "positive_ctxs": [{"text": "The first four are: 1) The mean, which indicates the central tendency of a distribution. 2) The second moment is the variance, which indicates the width or deviation. 3) The third moment is the skewness, which indicates any asymmetric 'leaning' to either left or right."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consider the ordered list {1,2,3,4} which contains four data values. What is the 75th percentile of this list using the Microsoft Excel method?"}, {"text": "In general, dimensionless quantities are scale invariant. 
The analogous concept in statistics are standardized moments, which are scale invariant statistics of a variable, while the unstandardized moments are not."}, {"text": "of a beta distribution supported in the [a, c] interval -see section \"Alternative parametrizations, Four parameters\"-) can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis). The excess kurtosis was expressed in terms of the square of the skewness, and the sample size \u03bd = \u03b1 + \u03b2, (see previous section \"Kurtosis\") as follows:"}, {"text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X \u2212 E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions."}, {"text": "The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X \u2212 E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions."}, {"text": "The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson."}, {"text": "All the odd moments are zero, by \u00b1 symmetry. The even moments are the sum over all partition into pairs of the product of G(x \u2212 y) for each pair."}]}, {"question": "Why do l get NaN values when l train my neural network with a rectified linear unit", "positive_ctxs": [{"text": "You probably have a numerical stability issue. 
This may happen due to zero division or any operation that is making a number(s) extremely big."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets."}, {"text": "This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set."}, {"text": "This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. 
One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set."}]}, {"question": "What is the significance of the beta distribution What are some common applications", "positive_ctxs": [{"text": "The beta distribution of the first kind, usually written in terms of the incomplete beta function, can be used to model the distribution of measurements whose values all lie between zero and one. It can also be used to model the distribution for the probability of occurrence of some discrete event."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary."}, {"text": "What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary."}, {"text": "For some of the above problems, it may also be interesting to ask about statistical significance. 
What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence? When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with failing to reject the hypothesis for the output sequence."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "The (continuous case) differential entropy was introduced by Shannon in his original paper (where he named it the \"entropy of a continuous distribution\"), as the concluding part of the same paper where he defined the discrete entropy. It is known since then that the differential entropy may differ from the infinitesimal limit of the discrete entropy by an infinite offset, therefore the differential entropy can be negative (as it is for the beta distribution). What really matters is the relative value of entropy."}, {"text": "What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? 
That period is the solution for T of some dimensionless equation in the variables T, m, k, and g."}]}, {"question": "What random events mean", "positive_ctxs": [{"text": "Random event/process/variable: an event/process that is not and cannot be made exact and, consequently, whose outcome cannot be predicted, e.g., the sum of the numbers on two rolled dice."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In essence, misapplication of regression toward the mean can reduce all events to a just-so story, without cause or effect. (Such misapplication takes as a premise that all events are random, as they must be for the concept of regression toward the mean to be validly applied.)"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For a random sample of N observations on the jth random variable, the sample mean's distribution itself has mean equal to the population mean"}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. 
The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables."}, {"text": "Even though events are subsets of some sample space \u03a9, they are often written as predicates or indicators involving random variables. For example, if X is a real-valued random variable defined on the sample space \u03a9, the event"}]}, {"question": "What is the definition of metric system", "positive_ctxs": [{"text": "metric system. A system of measurement in which the basic units are the meter, the second, and the kilogram. In this system, the ratios between units of measurement are multiples of ten. For example, a kilogram is a thousand grams, and a centimeter is one-hundredth of a meter."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The metric also allows for a definition of limits and completeness - a metric space that is complete is known as a Banach space. 
A complete metric space along with the additional structure of an inner product (a conjugate symmetric sesquilinear form) is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square integrable functions, which is the only Hilbert space among them."}, {"text": "In a de minimis definition, severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 T\u014dhoku earthquake and tsunami)\u2014in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities."}, {"text": "These two notions of completeness ignore the field structure. However, an ordered group (in this case, the additive group of the field) defines a uniform structure, and uniform structures have a notion of completeness; the description in \u00a7 Completeness is a special case. (We refer to the notion of completeness in uniform spaces rather than the related and better known notion for metric spaces, since the definition of metric space relies on already having a characterization of the real numbers.)"}, {"text": "satisfies the properties of a metric (triangle inequality, non-negativity, indiscernability and symmetry). 
This distance metric is also known as the variation of information."}, {"text": ", the metric is the minimum \"cost\" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. Because of this analogy, the metric is known in computer science as the earth mover's distance."}, {"text": "Thermodynamic equilibrium for an open system means that, with respect to every relevant kind of selectively permeable wall, contact equilibrium exists when the respective intensive parameters of the system and surroundings are equal. This definition does not consider the most general kind of thermodynamic equilibrium, which is through unselective contacts. This definition does not simply state that no current of matter or energy exists in the interior or at the boundaries; but it is compatible with the following definition, which does so state."}]}, {"question": "Are there ever any circumstances when a correlation can be interpreted as evidence for a causal connection between two variables", "positive_ctxs": [{"text": "A correlation between two variables does not imply causation. On the other hand, if there is a causal relationship between two variables, they must be correlated. Example: A study shows that there is a negative correlation between a student's anxiety before a test and the student's score on the test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. 
Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship."}, {"text": "In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be."}, {"text": "In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be."}, {"text": "In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related."}, {"text": "In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense correlation is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related."}, {"text": "Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of \u22121) rank between the two variables."}, {"text": "The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). 
If there are no repeated data values, a perfect Spearman correlation of +1 or \u22121 occurs when each of the variables is a perfect monotone function of the other."}]}, {"question": "What does the negative in negative binomial distribution signify", "positive_ctxs": [{"text": "A negative binomial random variable is the number X of repeated trials to produce r successes in a negative binomial experiment. The probability distribution of a negative binomial random variable is called a negative binomial distribution. Suppose we flip a coin repeatedly and count the number of heads (successes)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "Because of this, the negative binomial distribution is also known as the gamma\u2013Poisson (mixture) distribution. The negative binomial distribution was originally derived as a limiting case of the gamma-Poisson distribution."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "In other words, the alternatively parameterized negative binomial distribution converges to the Poisson distribution and r controls the deviation from the Poisson. 
This makes the negative binomial distribution suitable as a robust alternative to the Poisson, which approaches the Poisson for large r, but which has larger variance than the Poisson for small r."}, {"text": "Each of these definitions of the negative binomial distribution can be expressed in slightly different but equivalent ways. The first alternative formulation is simply an equivalent form of the binomial coefficient, that is:"}, {"text": "Each of these definitions of the negative binomial distribution can be expressed in slightly different but equivalent ways. The first alternative formulation is simply an equivalent form of the binomial coefficient, that is:"}, {"text": "Well-known discrete probability distributions used in statistical modeling include the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. Additionally, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices."}]}, {"question": "What is the standard error of the mean difference", "positive_ctxs": [{"text": "The standard deviation of this set of mean values is the standard error. In lieu of taking many samples one can estimate the standard error from a single sample. This estimate is derived by dividing the standard deviation by the square root of the sample size."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. 
In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. 
If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}]}, {"question": "What is autoregression time series", "positive_ctxs": [{"text": "An autoregressive model is when a value from a time series is regressed on previous values from that same time series. The order of an autoregression is the number of immediately preceding values in the series that are used to predict the value at the present time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). 
With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data."}, {"text": "A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional dataset). A data set may exhibit characteristics of both panel data and time series data."}]}, {"question": "What are the units of a probability density function", "positive_ctxs": [{"text": "Technically, the probability density of variable X , means the probability per unit increment of X . 
The units of probability density are the reciprocal of the units of X \u2014 if the units of X are dollars, the units of probability density are probability per dollar increment."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. 
This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann\u2013Stieltjes integral"}]}, {"question": "What is a running median", "positive_ctxs": [{"text": "The term \"running median\" is typically used to refer to the median of a subset of data."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations.More generally, a median is defined as a minimum of"}, {"text": "Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. 
In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations. More generally, a median is defined as a minimum of"}, {"text": "For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator. If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation."}, {"text": "Although the worst-case running time is \u0398(n2), the average-case running time is \u0398(nlogn). It turns out that the worst-case does not happen often. For large value of n, the running time is \u0398(nlogn) with a high probability."}, {"text": "A simple one is the median of three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median of three rule applied with limited recursion: if A is the sample laid out as an array, and"}, {"text": "The median absolute deviation (also MAD) is the median of the absolute deviation from the median. It is a robust estimator of dispersion."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? 
( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "How do you calculate the margin of error", "positive_ctxs": [{"text": "How to calculate margin of errorGet the population standard deviation (\u03c3) and sample size (n).Take the square root of your sample size and divide it into your population standard deviation.Multiply the result by the z-score consistent with your desired confidence interval according to the following table:"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The margin of error is a statistic expressing the amount of random sampling error in the results of a survey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a survey of the entire population. The margin of error will be positive whenever a population is incompletely sampled and the outcome measure has positive variance, which is to say, the measure varies."}, {"text": "One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "Such an interval is called a confidence interval for the parameter \u03bc. How do we calculate such an interval? 
The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves."}, {"text": "The results focusing on that group will be much less reliable than results for the full population. If the margin of error for the full sample was 4%, say, then the margin of error for such a subgroup could be around 13%."}, {"text": "A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population average will be within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample, however if a pollster wishes to reduce the margin of error to 1% they would need a sample of around 10,000 people. In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error and a sample size of around 500\u20131,000 is a typical compromise for political polls."}]}, {"question": "What is spatio temporal model", "positive_ctxs": [{"text": "Spatiotemporal models arise when data are collected across time as well as space and has at least one spatial and one temporal property. An event in a spatiotemporal dataset describes a spatial and temporal phenomenon that exists at a certain time t and location x."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. 
What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "Some attempts have been made to model these temporal logics using both computational formalisms such as the Event Calculus and temporal logics such as defeasible temporal logic. In any consideration of the use of logic to model law it needs to be borne in mind that law is inherently non-monotonic, as is shown by the rights of appeal enshrined in all legal systems, and the way in which interpretations of the law change over time. Moreover, in the drafting of law exceptions abound, and, in the application of law, precedents are overturned as well as followed. In logic programming approaches, negation as failure is often used to handle non-monotonicity, but specific non-monotonic logics such as defeasible logic have also been used."}, {"text": "An important way to model check is to express desired properties (such as the ones described above) using LTL operators and actually check if the model satisfies this property. One technique is to obtain a B\u00fcchi automaton that is equivalent to the model (accepts an \u03c9-word precisely if it is the model) and another one that is equivalent to the negation of the property (accepts an \u03c9-word precisely if it satisfies the negated property) (cf.
Linear temporal logic to B\u00fcchi automaton)."}]}, {"question": "What is vanishing gradient problem in neural networks", "positive_ctxs": [{"text": "In machine learning, the vanishing gradient problem is encountered when training artificial neural networks with gradient-based learning methods and backpropagation. The problem is that in some cases, the gradient will be vanishingly small, effectively preventing the weight from changing its value."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In machine learning, the vanishing gradient problem is encountered when training artificial neural networks with gradient-based learning methods and backpropagation. In such methods, each of the neural network's weights receives an update proportional to the partial derivative of the error function with respect to the current weight in each iteration of training. The problem is that in some cases, the gradient will be vanishingly small, effectively preventing the weight from changing its value."}, {"text": "In machine learning, the vanishing gradient problem is encountered when training artificial neural networks with gradient-based learning methods and backpropagation. In such methods, each of the neural network's weights receives an update proportional to the partial derivative of the error function with respect to the current weight in each iteration of training. The problem is that in some cases, the gradient will be vanishingly small, effectively preventing the weight from changing its value."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. 
Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "One of the newest and most effective ways to resolve the vanishing gradient problem is with residual neural networks, or ResNets (not to be confused with recurrent neural networks). ResNets refer to neural networks where skip connections or residual connections are part of the network architecture. These skip connections allow gradient information to pass through the layers, by creating \"highways\" of information, where the output of a previous layer/activation is added to the output of a deeper layer."}, {"text": "One of the newest and most effective ways to resolve the vanishing gradient problem is with residual neural networks, or ResNets (not to be confused with recurrent neural networks). ResNets refer to neural networks where skip connections or residual connections are part of the network architecture. 
These skip connections allow gradient information to pass through the layers, by creating \"highways\" of information, where the output of a previous layer/activation is added to the output of a deeper layer."}, {"text": "However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, which is where gradients passed between layers gradually shrink and literally disappear as they are rounded off to zero. There have been many methods developed to approach this problem, such as Long short-term memory units."}]}, {"question": "How do you plot a box plot", "positive_ctxs": [{"text": "In a box plot, we draw a box from the first quartile to the third quartile. A vertical line goes through the box at the median. The whiskers go from each quartile to the minimum or maximum."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Complex online box plot creator with example data - see also BoxPlotR: a web tool for generation of box plots Spitzer et al. Nature Methods 11, 121\u2013122 (2014)"}, {"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "Graphs that are appropriate for bivariate analysis depend on the type of variable. For two continuous variables, a scatterplot is a common graph. When one variable is categorical and the other continuous, a box plot is common and when both are categorical a mosaic plot is common."}, {"text": "In descriptive statistics, a box plot or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram. 
Outliers may be plotted as individual points."}, {"text": "Since the mathematician John W. Tukey popularized this type of visual data display in 1969, several variations on the traditional box plot have been described. Two of the most common are variable width box plots and notched box plots (see Figure 4)."}, {"text": "The box plot allows quick graphical examination of one or more data sets. Box plots may seem more primitive than a histogram or kernel density estimate but they do have some advantages. They take up less space and are therefore particularly useful for comparing distributions between several groups or sets of data (see Figure 1 for an example)."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is inference in Bayesian networks", "positive_ctxs": [{"text": "Inference over a Bayesian network can come in two forms. The first is simply evaluating the joint probability of a particular assignment of values for each variable (or a subset) in the network. We would calculate P(\u00acx | e) in the same fashion, just setting the value of the variables in x to false instead of true."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data."}, {"text": "In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. 
In 1993, Dagum and Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks."}, {"text": "In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Dagum and Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks."}, {"text": "Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks."}, {"text": "Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks."}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1\u2212\u025b)) for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation.
This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}, {"text": "At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1\u2212\u025b)) for every \u025b > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as na\u00efve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n) where p(n) was any polynomial on the number of nodes in the network n."}]}, {"question": "How is the decision tree useful", "positive_ctxs": [{"text": "Decision trees provide an effective method of Decision Making because they: Clearly lay out the problem so that all options can be challenged. Allow us to analyze fully the possible consequences of a decision. Provide a framework to quantify the values of outcomes and the probabilities of achieving them."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature.
The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space."}, {"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features.A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). 
While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}, {"text": "Rotation forest \u2013 in which every decision tree is trained by first applying principal component analysis (PCA) on a random subset of the input features. A special case of a decision tree is a decision list, which is a one-sided decision tree, so that every internal node has exactly 1 leaf node and exactly 1 internal node as a child (except for the bottommost node, whose only child is a single leaf node). While less expressive, decision lists are arguably easier to understand than general decision trees due to their added sparsity, permit non-greedy learning methods and monotonic constraints to be imposed. Notable decision tree algorithms include:"}, {"text": "The game-tree complexity of a game is the number of leaf nodes in the smallest full-width decision tree that establishes the value of the initial position. A full-width tree includes all nodes at each depth."}, {"text": "It allows developers to confirm that the model has learned realistic information from the data and allows end-users to have trust and confidence in the decisions made by the model. For example, following the path that a decision tree takes to make its decision is quite trivial, but following the paths of 100's of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming a random forest into a minimal \"born-again\" decision tree that faithfully reproduces the same decision function."}]}, {"question": "How do you find the accuracy of a linear regression model", "positive_ctxs": [{"text": "There are several ways to check your Linear Regression model accuracy. Usually, you may use Root mean squared error.
You may train several Linear Regression models, adding or removing features to your dataset, and see which one has the lowest RMSE - the best one in your case."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value."}, {"text": "In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value."}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. 
The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}, {"text": "Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is"}]}, {"question": "What are the types of factor analysis", "positive_ctxs": [{"text": "There are two types of factor analyses, exploratory and confirmatory. Exploratory factor analysis (EFA) is method to explore the underlying structure of a set of observed variables, and is a crucial step in the scale development process. The first step in EFA is factor extraction."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This equivalence is fully explained in a book by J\u00e9r\u00f4me Pag\u00e8s. It plays an important theoretical role because it opens the way to the simultaneous treatment of quantitative and qualitative variables. Two methods simultaneously analyze these two types of variables: factor analysis of mixed data and, when the active variables are partitioned in several groups: multiple factor analysis."}, {"text": "Canonical factor analysis seeks factors which have the highest canonical correlation with the observed variables. Canonical factor analysis is unaffected by arbitrary rescaling of the data."}, {"text": "Higher-order factor analysis is a statistical method consisting of repeating steps factor analysis \u2013 oblique rotation \u2013 factor analysis of rotated factors. Its merit is to enable the researcher to see the hierarchical structure of studied phenomena. 
To interpret the results, one proceeds either by post-multiplying the primary factor pattern matrix by the higher-order factor pattern matrices (Gorsuch, 1983) and perhaps applying a Varimax rotation to the result (Thompson, 1990) or by using a Schmid-Leiman solution (SLS, Schmid & Leiman, 1957, also known as Schmid-Leiman transformation) which attributes the variation from the primary factors to the second-order factors."}, {"text": "If the factor model is incorrectly formulated or the assumptions are not met, then factor analysis will give erroneous results. Factor analysis has been used successfully where adequate understanding of the system permits good initial model formulations. PCA employs a mathematical transformation to the original data with no assumptions about the form of the covariance matrix."}, {"text": "The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data."}, {"text": "Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts?"}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}]}, {"question": "What are the advantages and disadvantages of decision tree", "positive_ctxs": [{"text": "Advantages and disadvantagesAre simple to understand and interpret. Have value even with little hard data. Help determine worst, best and expected values for different scenarios.Use a white box model. 
Can be combined with other decision techniques."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Since both advantages and disadvantages present on the two way of modeling, combining both approaches will be a good modeling in practice. For example, in Marras' article A Joint Discriminative Generative Model for Deformable Model Construction and Classification, he and his coauthors apply the combination of two modelings on face classification of the models, and receive a higher accuracy than the traditional approach."}, {"text": "The choice of numerator layout in the introductory sections below does not imply that this is the \"correct\" or \"superior\" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors."}, {"text": "It allows developers to confirm that the model has learned realistic information from the data and allows end-users to have trust and confidence in the decisions made by the model. For example, following the path that a decision tree takes to make its decision is quite trivial, but following the paths of 100's of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming a random forest into a minimal \"born-again\" decision tree that faithfully reproduces the same decision function."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.
In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risks overfitting the training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "Why do coefficients change in multiple regression", "positive_ctxs": [{"text": "If there are other predictor variables, all coefficients will be changed. All the coefficients are jointly estimated, so every new variable changes all the other coefficients already in the model. This is one reason we do multiple regression, to estimate coefficient B1 net of the effect of variable Xm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients.
In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient \u2013 the odds ratio (see definition). 
In linear regression, the significance of a regression coefficient is assessed by computing a t test."}, {"text": "Under rescaling, when the system is shrunk by a factor of (1+b), the t coefficient scales up by a factor (1+b)^2 by dimensional analysis. The change in t for infinitesimal b is 2bt. The other two coefficients are dimensionless and do not change at all."}]}, {"question": "What are random errors", "positive_ctxs": [{"text": "Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors usually result from the experimenter's inability to take the same measurement in exactly the same way to get exactly the same number."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What the second experiment achieves with eight would require 64 weighings if the items are weighed separately. However, note that the estimates for the items obtained in the second experiment have errors that correlate with each other."}, {"text": "However, across a large number of individuals, the causes of measurement error are assumed to be so varied that measure errors act as random variables. If errors have the essential characteristics of random variables, then it is reasonable to assume that errors are equally likely to be positive or negative, and that they are not correlated with true scores or with errors on other tests."}, {"text": "The central assumption of reliability theory is that measurement errors are essentially random. This does not mean that errors arise from random processes. For any individual, an error in measurement is not a completely random event."}, {"text": "Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term \"error\" here includes systematic biases as well as random errors."}, {"text": "Survey results are typically subject to some error.
Total errors can be classified into sampling errors and non-sampling errors. The term \"error\" here includes systematic biases as well as random errors."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The random or stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be normally distributed when the stochastic error is the sum of many independent random errors because of the central limit theorem. Stochastic errors added to a regression equation account for the variation in Y that cannot be explained by the included Xs."}]}, {"question": "Why is sigmoid a good activation function", "positive_ctxs": [{"text": "The main reason why we use sigmoid function is because it exists between (0 to 1). Therefore, it is especially used for models where we have to predict the probability as an output. Since probability of anything exists only between the range of 0 and 1, sigmoid is the right choice. The function is differentiable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a mathematical function having a characteristic \"S\"-shaped curve or sigmoid curve. 
A common example of a sigmoid function is the logistic function shown in the first figure and defined by the formula:"}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "A sigmoid function is a bounded, differentiable, real function that is defined for all real input values and has a non-negative derivative at each point and exactly one inflection point. A sigmoid \"function\" and a sigmoid \"curve\" refer to the same object."}, {"text": "Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded."}]}, {"question": "Is it possible to use ensemble learning for time series forecast", "positive_ctxs": [{"text": "Ensemble learning methods are widely used nowadays for their predictive performance improvement. Ensemble learning combines multiple predictions (forecasts) from one or multiple methods to improve on the accuracy of a simple prediction and to avoid possible overfit."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Population projections are produced in advance of the date they are for. They use time series analysis of existing census data and other sources of population information to forecast the size of future populations.
Because there are unknown factors that may affect future population changes, population projections often incorporate high and low as well as expected values for future populations."}, {"text": "Forecast skill for single-value forecasts (i.e., time series of a scalar quantity) is commonly represented in terms of metrics such as correlation, root mean squared error, mean absolute error, relative mean absolute error, bias, and the Brier score, among others. A number of scores associated with the concept of entropy in information theory are also being used. The term 'forecast skill' may also be used qualitatively, in which case it could either refer to forecast performance according to a single metric or to the overall forecast performance based on multiple metrics."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values.
While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called "time series analysis," which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}, {"text": "Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test relationships between one or more different time series, this type of analysis is not usually called \"time series analysis,\" which refers in particular to relationships between different points in time within a single series."}]}, {"question": "What is the use of finding the root mean square error", "positive_ctxs": [{"text": "The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample or population values) predicted by a model or an estimator and the values observed."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators.
Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}, {"text": "Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. In other words, the standard error of the mean is a measure of the dispersion of sample means around the population mean."}]}, {"question": "In importance sampling what is the difference between p x and q x", "positive_ctxs": [{"text": "The distribution pX (x) is called the target distribution, while qX (x) is the sampling distribution or the proposal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Difference, x \u2212 y: The difference of two points x and y is the n-tuple that has ones where x and y differ and zeros elsewhere. 
It is the bitwise 'exclusive or': x \u2212 y = x \u2295 y. The difference commutes: x \u2212 y = y \u2212 x."}, {"text": "The compound p \u2192 q is false if and only if p is true and q is false. By the same stroke, p \u2192 q is true if and only if either p is false or q is true (or both). The \u2192 symbol is a function that uses pairs of truth values of the components p, q (e.g., p is True, q is True ... p is False, q is False) and maps it to the truth values of the compound p \u2192 q."}, {"text": "It merely means \"if p is true then q is also true\", such that the statement p \u2192 q is false only when p is true and q is false. In a bivalent truth table of p \u2192 q, if p is false then p \u2192 q is true, regardless of whether q is true or false (Latin phrase: ex falso quodlibet) since (1) p \u2192 q is always true as long as q is true, and (2) p \u2192 q is true when both p and q are false. This truth table is useful in proving some mathematical theorems (e.g., defining a subset)."}, {"text": "Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion."}, {"text": "Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 \u2212 skewness^2 = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the \"impossible region.\" The boundary for this \"impossible region\" is determined by (symmetric or skewed) bimodal \"U\"-shaped distributions for which parameters \u03b1 and \u03b2 approach zero and hence all the probability density is concentrated at the ends: x = 0, 1 with practically nothing in between them.
Since for \u03b1 \u2248 \u03b2 \u2248 0 the probability density is concentrated at the two ends x = 0 and x = 1, this \"impossible boundary\" is determined by a 2-point distribution: the probability can only take 2 values (Bernoulli distribution), one value with probability p and the other with probability q = 1\u2212p."}, {"text": "In instances of modus ponens we assume as premises that p \u2192 q is true and p is true. Only one line of the truth table\u2014the first\u2014satisfies these two conditions (p and p \u2192 q). On this line, q is also true."}, {"text": "In instances of modus tollens we assume as premises that p \u2192 q is true and q is false. There is only one line of the truth table\u2014the fourth line\u2014which satisfies these two conditions. In this line, p is false."}]}, {"question": "Why does regression line go through mean", "positive_ctxs": [{"text": "If there is no relationship between X and Y, the best guess for all values of X is the mean of Y. At any rate, the regression line always passes through the means of X and Y. This means that, regardless of the value of the slope, when X is at its mean, so is Y."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g."}, {"text": "In regression analysis, overfitting occurs frequently. As an extreme example, if there are p variables in a linear regression with p data points, the fitted line can go exactly through every point. For logistic regression or Cox proportional hazards models, there are a variety of rules of thumb (e.g."}, {"text": "A basic tool for econometrics is the multiple linear regression model. 
In modern econometrics, other statistical tools are frequently used, but linear regression is still the most frequently used starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables."}, {"text": "Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However, these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases."}, {"text": "Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = \u03b2x, the OLS estimator for \u03b2 simplifies to"}, {"text": "Sometimes it is appropriate to force the regression line to pass through the origin, because x and y are assumed to be proportional. For the model without the intercept term, y = \u03b2x, the OLS estimator for \u03b2 simplifies to"}, {"text": "In this case one might proceed by regressing the data against the quantiles of a normal distribution with the same mean and variance as the sample. Lack of fit to the regression line suggests a departure from normality (see Anderson-Darling coefficient and Minitab)."}]}, {"question": "How do you find the correlation coefficient between two sets of data", "positive_ctxs": [{"text": "How to Calculate a Correlation: Find the mean of all the x-values. Find the standard deviation of all the x-values (call it sx) and the standard deviation of all the y-values (call it sy).
For each of the n pairs (x, y) in the data set, take. Add up the n results from Step 3. Divide the sum by sx \u2217 sy."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "In statistics, the Pearson correlation coefficient (PCC, pronounced ), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC), or the bivariate correlation, is a measure of linear correlation between two sets of data. It is the covariance of two variables, divided by the product of their standard deviations; thus it is essentially a normalised measurement of the covariance, such that the result always has a value between -1 and 1. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationship or correlation."}, {"text": "Under heavy noise conditions, extracting the correlation coefficient between two sets of stochastic variables is nontrivial, in particular where Canonical Correlation Analysis reports degraded correlation values due to the heavy noise contributions.
A generalization of the approach is given elsewhere. In case of missing data, Garren derived the maximum likelihood estimator."}, {"text": "Under heavy noise conditions, extracting the correlation coefficient between two sets of stochastic variables is nontrivial, in particular where Canonical Correlation Analysis reports degraded correlation values due to the heavy noise contributions. A generalization of the approach is given elsewhere. In case of missing data, Garren derived the maximum likelihood estimator."}, {"text": "For example, Spearman's rank correlation coefficient is useful to measure the statistical dependence between the rankings of athletes in two tournaments. And the Kendall rank correlation coefficient is another approach."}, {"text": "Given a set of data that contains information on medical patients, your goal is to find correlation for a disease. Before you can start iterating through the data, ensure that you have an understanding of the result: are you looking for patients who have the disease? Are there other diseases that can be the cause?"}, {"text": "If we compute the Pearson correlation coefficient between variables X and Y, the result is approximately 0.970, while if we compute the partial correlation between X and Y, using the formula given above, we find a partial correlation of 0.919. The computations were done using R with the following code."}]}, {"question": "How does selection bias affect results", "positive_ctxs": [{"text": "Selection bias can result when the selection of subjects into a study or their likelihood of being retained in the study leads to a result that is different from what you would have gotten if you had enrolled the entire target population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A group of 20 students spends between 0 and 6 hours studying for an exam.
How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "A group of 20 students spends between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability of the student passing the exam?"}, {"text": "In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average."}, {"text": "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."}, {"text": "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."}, {"text": "Two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other."}]}, {"question": "What is meant by correlation and regression analysis", "positive_ctxs": [{"text": "Regression analysis refers to assessing the relationship between the outcome variable and one or more variables. 
For example, a correlation of r = 0.8 indicates a positive and strong association between two variables, while a correlation of r = -0.3 shows a negative and weak association."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "where \u03a6(\u00b7) is the cumulative distribution function of a Gaussian distribution with zero mean and unit standard deviation, and N is the sample size. This z-transform is approximate, and the actual distribution of the sample (partial) correlation coefficient is not straightforward. However, an exact t-test based on a combination of the partial regression coefficient, the partial correlation coefficient and the partial variances is available. The distribution of the sample partial correlation was described by Fisher."}, {"text": "Until a more analytical solution to MAUP is discovered, spatial sensitivity analysis using a variety of areal units is recommended as a methodology to estimate the uncertainty of correlation and regression coefficients due to ecological bias.
An example of data simulation and re-aggregation using the ArcPy library is available. In transport planning, MAUP is associated with Traffic Analysis Zoning (TAZ). A major point of departure in understanding problems in transportation analysis is the recognition that spatial analysis has some limitations associated with the discretization of space."}, {"text": "The two measures in the study are taken at the same time. This is in contrast to predictive validity, where one measure occurs earlier and is meant to predict some later measure. In both cases, the (concurrent) predictive power of the test is analyzed using a simple correlation or linear regression."}, {"text": "Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables."}]}, {"question": "What does standard error of estimate tell you", "positive_ctxs": [{"text": "The standard error of the regression (S), also known as the standard error of the estimate, represents the average distance that the observed values fall from the regression line. Conveniently, it tells you how wrong the regression model is on average using the units of the response variable."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained.
This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "is equal to the square root of the j-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity \u03c3^2 with its estimate s^2."}, {"text": "is equal to the square root of the j-th diagonal element of this matrix. The estimate of this standard error is obtained by replacing the unknown quantity \u03c3^2 with its estimate s^2."}, {"text": "For example, a poll's standard error (what is reported as the margin of error of the poll) is the expected standard deviation of the estimated mean if the same poll were to be conducted multiple times.
Thus, the standard error estimates the standard deviation of an estimate, which itself measures how much the estimate depends on the particular sample that was taken from the population."}, {"text": "For example, a poll's standard error (what is reported as the margin of error of the poll) is the expected standard deviation of the estimated mean if the same poll were to be conducted multiple times. Thus, the standard error estimates the standard deviation of an estimate, which itself measures how much the estimate depends on the particular sample that was taken from the population."}]}, {"question": "Can a sampling frame be seen as a population", "positive_ctxs": [{"text": "A sampling frame is a list of all the items in your population. It's a complete list of everyone or everything you want to study. The difference between a population and a sampling frame is that the population is general and the frame is specific."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In statistics, a sampling frame is the source material or device from which a sample is drawn. It is a list of all those within a population who can be sampled, and may include individuals, households or institutions. Importance of the sampling frame is stressed by Jessen and Salant and Dillman."}, {"text": "As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory."}, {"text": "As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information.
For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory."}, {"text": "The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register or a telephone directory. Other sampling frames can include employment records, school class lists, patient files in a hospital, organizations listed in a thematic database, and so on."}, {"text": "When the population embraces a number of distinct categories, the frame can be organized by these categories into separate \"strata.\" Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected. The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction."}, {"text": "When the population embraces a number of distinct categories, the frame can be organized by these categories into separate \"strata.\" Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected. The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction."}, {"text": "The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman\u2013Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques."}]}, {"question": "What is considered active activity level", "positive_ctxs": [{"text": "Fewer than 1,000 steps a day is sedentary. 1,000 to 10,000 steps or about 4 miles a day is Lightly Active. 10,000 to 23,000 steps or 4 to 10 miles a day is considered Active. 
More than 23,000 steps or 10 miles a day is Highly active."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Since a constant barrage of synaptic activity would approximate a constant current injection, the overall level of synaptic activity in the distal apical dendrite could set the depolarization level of the entire neuron. When a more efficient proximal synaptic activity is superimposed upon a sub-threshold depolarization due to distal activity, the cell has a high probability of firing an AP. In CA3, it is the perforant path projection from the entorhinal cortical cells that provides synaptic input to the most distal dendrites of the pyramidal cells."}, {"text": "The activity of neurons in the brain can be modelled statistically. Each neuron at any time is either active + or inactive \u2212. The active neurons are those that send an action potential down the axon in any given time window, and the inactive ones are those that do not."}, {"text": "Ultimately, biological neuron models aim to explain the mechanisms underlying the operation of the nervous system. Modeling helps to analyze experimental data and address questions such as: How are the spikes of a neuron related to sensory stimulation or motor activity such as arm movements? What is the neural code used by the nervous system?"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The post-synaptic structure is driven in part by signals from incoming afferent fibers and through life there is plasticity in the synapses. The formation of these arbors is regulated by the strength of local signals during development. Several patterns in activity control the development of the brain. Action potential changes in the retina, hippocampus, cortex, and spinal cord provide activity-based signals both to the active neurons and their post-synaptic target cells."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is uniform convergence series", "positive_ctxs": [{"text": "A series converges uniformly on if the sequence of partial sums defined by. (2) converges uniformly on . To test for uniform convergence, use Abel's uniform convergence test or the Weierstrass M-test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "denotes the limit of the corresponding finite partial sums of the sequence (fi)i\u2208N of elements of V. For example, the fi could be (real or complex) functions belonging to some function space V, in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete.
Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval [0,1], equipped with the topology of uniform convergence is not complete because any continuous function on [0,1] can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "The dominance condition can be employed in the i.i.d. case: the uniform convergence in probability can be checked by showing that the sequence"}, {"text": "The dominance condition can be employed in the i.i.d. case: the uniform convergence in probability can be checked by showing that the sequence"}, {"text": "The dominance condition can be employed in the i.i.d. case: the uniform convergence in probability can be checked by showing that the sequence"}
This is likely due to the following factors: (1) LCS is a relatively complicated algorithmic approach, (2) LCS, rule-based modeling is a different paradigm of modeling than almost all other machine learning approaches."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}, {"text": "The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms."}]}, {"question": "Is transfer learning unsupervised", "positive_ctxs": [{"text": "Transfer learning without any labeled data from the target domain is referred to as unsupervised transfer learning."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In 1976 Stevo Bozinovski and Ante Fulgosi published a paper explicitly addressing transfer learning in neural networks training. The paper gives a mathematical and geometrical model of transfer learning. In 1981 a report was given on application of transfer learning in training a neural network on a dataset of images representing letters of computer terminals."}, {"text": "The history of learning vector-valued functions is closely linked to transfer learning- storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on \u201cLearning to Learn,\u201d which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. 
Research on transfer learning has attracted much attention since 1995 in different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Both positive and negative transfer learning was experimentally demonstrated. In 1993, Lorien Pratt published a paper on transfer in machine learning, formulating the discriminability-based transfer (DBT) algorithm. In 1997, the journal Machine Learning published a special issue devoted to transfer learning, and by 1998, the field had advanced to include multi-task learning, along with a more formal analysis of its theoretical foundations. Learning to Learn, edited by Pratt and Sebastian Thrun, is a 1998 review of the subject."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function. Though unsupervised learning encompasses other domains involving summarizing and explaining data features."}, {"text": "A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function. 
Though unsupervised learning encompasses other domains involving summarizing and explaining data features."}]}, {"question": "What does bootstrapping mean in statistics", "positive_ctxs": [{"text": "Bootstrapping is any test or metric that uses random sampling with replacement, and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}, {"text": "is a model, often in idealized form, of the process that generated the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications."}]}, {"question": "What is a logarithmic function definition", "positive_ctxs": [{"text": ": a function (such as y = loga x or y = ln x) that is the inverse of an exponential function (such as y = ax or y = ex) so that the independent variable appears in a logarithm."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where b is a parameter that indicates which logarithmic function is being used. It is not an argument of the function, and will, for instance, be a constant when considering the derivative"}, {"text": "Logarithms occur in several laws describing human perception: Hick's law proposes a logarithmic relation between the time individuals take to choose an alternative and the number of choices they have. 
Fitts's law predicts that the time required to rapidly move to a target area is a logarithmic function of the distance to and the size of the target. In psychophysics, the Weber\u2013Fechner law proposes a logarithmic relationship between stimulus and sensation such as the actual vs. the perceived weight of an item a person is carrying."}, {"text": "Several principles define a linear system. The basic definition of linearity is that the output must be a linear function of the inputs, that is"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:"}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What do you mean by constraint satisfaction problem", "positive_ctxs": [{"text": "Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. 
CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "The constraint composite graph is a node-weighted undirected graph associated with a given combinatorial optimization problem posed as a weighted constraint satisfaction problem. Developed and introduced by Satish Kumar Thittamaranahalli (T. K. Satish Kumar), the idea of the constraint composite graph is a big step towards unifying different approaches for exploiting \"structure\" in weighted constraint satisfaction problems.A weighted constraint satisfaction problem (WCSP) is a generalization of a constraint satisfaction problem in which the constraints are no longer \"hard,\" but are extended to specify non-negative costs associated with the tuples. The goal is then to find an assignment of values to all the variables from their respective domains so that the total cost is minimized."}, {"text": "Solving a constraint satisfaction problem on a finite domain is an NP complete problem with respect to the domain size. Research has shown a number of tractable subcases, some limiting the allowed constraint relations, some requiring the scopes of constraints to form a tree, possibly in a reformulated version of the problem. Research has also established relationship of the constraint satisfaction problem with problems in other areas such as finite model theory."}, {"text": "While weighted constraint satisfaction problems are NP-hard to solve in general, several subclasses can be solved in polynomial time when their weighted constraints exhibit specific kinds of numerical structure. Tractable subclasses can also be identified by analyzing the way constraints are placed over the variables. 
Specifically, a weighted constraint satisfaction problem can be solved in time exponential only in the treewidth of its variable-interaction graph (constraint network)."}, {"text": "Unlike the constraint network, the constraint composite graph provides a unifying framework for representing both the graphical structure of the variable-interactions as well as the numerical structure of the weighted constraints. It can be constructed using a simple polynomial-time procedure; and a given weighted constraint satisfaction problem is reducible to the problem of computing the minimum weighted vertex cover for its associated constraint composite graph. The \"hybrid\" computational properties of the constraint composite graph are reflected in the following two important results:"}, {"text": "A constraint satisfaction problem on such domain contains a set of variables whose values can only be taken from the domain, and a set of constraints, each constraint specifying the allowed values for a group of variables. A solution to this problem is an evaluation of the variables that satisfies all constraints. In other words, a solution is a way for assigning a value to each variable in such a way that all constraints are satisfied by these values."}, {"text": "The techniques used in constraint satisfaction depend on the kind of constraints being considered. Often used are constraints on a finite domain, to the point that constraint satisfaction problems are typically identified with problems based on constraints on a finite domain. 
Such problems are usually solved via search, in particular a form of backtracking or local search."}]}, {"question": "What is mean by sampling error", "positive_ctxs": [{"text": "A sampling error is a statistical error that occurs when an analyst does not select a sample that represents the entire population of data and the results found in the sample do not represent the results that would be obtained from the entire population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).The sampling distribution of a population mean is generated by repeated sampling and recording of the means obtained. 
This forms a distribution of different means, and this distribution has its own mean and variance."}, {"text": "The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE."}, {"text": "In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. In other words, it is the actual or estimated standard deviation of the sampling distribution of the sample statistic. The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE."}]}, {"question": "How do you fix a vanishing gradient problem", "positive_ctxs": [{"text": "The simplest solution is to use other activation functions, such as ReLU, which doesn't cause a small derivative. Residual networks are another solution, as they provide residual connections straight to earlier layers."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) has increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this \"is basically what is winning many of the image recognition competitions now\", but that it \"does not really overcome the problem in a fundamental way\" since the original models tackling the vanishing gradient problem by Hinton and others were trained in a Xeon processor, not GPUs."}, {"text": "This allows information from the earlier parts of the network to be passed to the deeper parts of the network, helping maintain signal propagation even in deeper networks. Skip connections are a critical component of what allowed successful training of deeper neural networks. 
ResNets yielded lower training error (and test error) than their shallower counterparts simply by reintroducing outputs from shallower layers in the network to compensate for the vanishing data.Note that ResNets are an ensemble of relatively shallow nets and do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network \u2013 rather, they avoid the problem simply by constructing ensembles of many short networks together."}, {"text": "This allows information from the earlier parts of the network to be passed to the deeper parts of the network, helping maintain signal propagation even in deeper networks. Skip connections are a critical component of what allowed successful training of deeper neural networks. ResNets yielded lower training error (and test error) than their shallower counterparts simply by reintroducing outputs from shallower layers in the network to compensate for the vanishing data.Note that ResNets are an ensemble of relatively shallow nets and do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network \u2013 rather, they avoid the problem simply by constructing ensembles of many short networks together."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Long short-term memory (LSTM) is a deep learning system that avoids the vanishing gradient problem. LSTM is normally augmented by recurrent gates called \u201cforget gates\u201d. LSTM prevents backpropagated errors from vanishing or exploding."}]}, {"question": "What is Fisher's exact test used for", "positive_ctxs": [{"text": "Fisher's exact test is a statistical test used to determine if there are nonrandom associations between two categorical variables. . 
For each one, calculate the associated conditional probability using (2), where the sum of these probabilities must be 1."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2\u00d72 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction. Another alternative is to use maximum likelihood estimates to calculate a p-value from the exact binomial or multinomial distributions and reject or fail to reject based on the p-value.For stratified categorical data the Cochran\u2013Mantel\u2013Haenszel test must be used instead of Fisher's test."}, {"text": "An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2\u00d72 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction. Another alternative is to use maximum likelihood estimates to calculate a p-value from the exact binomial or multinomial distributions and reject or fail to reject based on the p-value.For stratified categorical data the Cochran\u2013Mantel\u2013Haenszel test must be used instead of Fisher's test."}, {"text": "The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test. Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (for more information see)."}, {"text": "For example, this is the case for Fisher's exact test and also its more powerful alternative, Boschloo's test. If the test statistic is continuous, it will reach the significance level exactly."}, {"text": "Fisher's exact test, based on the work of Ronald Fisher and E. J. G. 
Pitman in the 1930s, is exact because the sampling distribution (conditional on the marginals) is known exactly. Compare Pearson's chi-squared test, which (although it tests the same null) is not exact because the distribution of the test statistic is correct only asymptotically."}, {"text": "Fisher's exact test is a statistical significance test used in the analysis of contingency tables. Although in practice it is employed when sample sizes are small, it is valid for all sample sizes. It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., P-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests."}, {"text": "Fisher's exact test is a statistical significance test used in the analysis of contingency tables. Although in practice it is employed when sample sizes are small, it is valid for all sample sizes. It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., P-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests."}]}, {"question": "What does Gaussian mean", "positive_ctxs": [{"text": ": being or having the shape of a normal curve or a normal distribution."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives?"}, {"text": "A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. 
A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables."}, {"text": "The slow \"standard algorithm\" for k-means clustering, and its associated expectation-maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance. Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of \"hard\" Gaussian mixture modelling. This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data."}, {"text": "The slow \"standard algorithm\" for k-means clustering, and its associated expectation-maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance. Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of \"hard\" Gaussian mixture modelling. 
This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data."}, {"text": "The slow \"standard algorithm\" for k-means clustering, and its associated expectation-maximization algorithm, is a special case of a Gaussian mixture model, specifically, the limiting case when fixing all covariances to be diagonal, equal and have infinitesimal small variance. Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of \"hard\" Gaussian mixture modelling. This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "For example, actors are allowed to pipeline the processing of messages. What this means is that in the course of processing a message M1, an actor can designate the behavior to be used to process the next message, and then in fact begin processing another message M2 before it has finished processing M1. 
Just because an actor is allowed to pipeline the processing of messages does not mean that it must pipeline the processing."}]}, {"question": "What does a low R squared value mean", "positive_ctxs": [{"text": "A low R-squared value indicates that your independent variable is not explaining much in the variation of your dependent variable - regardless of the variable significance, this is letting you know that the identified independent variable, even though significant, is not accounting for much of the mean of your"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In regression analysis, \"mean squared error\", often referred to as mean squared prediction error or \"out-of-sample mean squared error\", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space."}, {"text": "In regression analysis, \"mean squared error\", often referred to as mean squared prediction error or \"out-of-sample mean squared error\", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space."}, {"text": "In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors\u2014that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. 
The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is a measure of the quality of an estimator\u2014it is always non-negative, and values closer to zero are better."}, {"text": "In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors\u2014that is, the average squared difference between the estimated values and the actual value. MSE is a risk function, corresponding to the expected value of the squared error loss. The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is a measure of the quality of an estimator\u2014it is always non-negative, and values closer to zero are better."}, {"text": "The use of mean squared error without question has been criticized by the decision theorist James Berger. Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application. Like variance, mean squared error has the disadvantage of heavily weighting outliers."}, {"text": "The use of mean squared error without question has been criticized by the decision theorist James Berger. Mean squared error is the negative of the expected value of one specific utility function, the quadratic utility function, which may not be the appropriate utility function to use under a given set of circumstances. 
There are, however, some scenarios where mean squared error can serve as a good approximation to a loss function occurring naturally in an application.Like variance, mean squared error has the disadvantage of heavily weighting outliers."}, {"text": "The mean absolute error is one of a number of ways of comparing forecasts with their eventual outcomes. Well-established alternatives are the mean absolute scaled error (MASE) and the mean squared error. These all summarize performance in ways that disregard the direction of over- or under- prediction; a measure that does place emphasis on this is the mean signed difference."}]}, {"question": "Is coding required in machine learning", "positive_ctxs": [{"text": "A little bit of coding skills is enough, but it's better to have knowledge of data structures, algorithms, and OOPs concept. Some of the popular programming languages to learn machine learning in are Python, R, Java, and C++."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power."}, {"text": "Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research as an approach to strong AI. 
Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the required computing power."}, {"text": "Sparse coding is a representation learning method which aims at finding a sparse representation of the input data (also known as sparse coding) in the form of a linear combination of basic elements as well as those basic elements themselves. These elements are called atoms and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may be an over-complete spanning set."}, {"text": "Sparse coding is a representation learning method which aims at finding a sparse representation of the input data (also known as sparse coding) in the form of a linear combination of basic elements as well as those basic elements themselves. These elements are called atoms and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may be an over-complete spanning set."}, {"text": "Is the yield of good cookies affected by the baking temperature and time in the oven? The table shows data for 8 batches of cookies."}, {"text": "Nonsense coding occurs when one uses arbitrary values in place of the designated \u201c0\u201ds \u201c1\u201ds and \u201c-1\u201ds seen in the previous coding systems. Although it produces correct mean values for the variables, the use of nonsense coding is not recommended as it will lead to uninterpretable statistical results."}, {"text": "Nonsense coding occurs when one uses arbitrary values in place of the designated \u201c0\u201ds \u201c1\u201ds and \u201c-1\u201ds seen in the previous coding systems. 
Although it produces correct mean values for the variables, the use of nonsense coding is not recommended as it will lead to uninterpretable statistical results."}]}, {"question": "Why is NLP difficult", "positive_ctxs": [{"text": "Natural Language processing is considered a difficult problem in computer science. It's the nature of the human language that makes NLP difficult. While humans can easily master a language, the ambiguity and imprecise characteristics of the natural languages are what make NLP difficult for machines to implement."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Akaike information criterion (AIC) method of model selection, and a comparison with MML: Dowe, D.L. ; Gardner, S.; Oppy, G. (Dec 2007). Why Simplicity is no Problem for Bayesians\"."}, {"text": "An interesting fact is that the original wiki software was created in 1995, but it took at least another six years for large wiki-based collaborative projects to appear. Why did it take so long? One explanation is that the original wiki software lacked a selection operation and hence couldn't effectively support content evolution."}, {"text": "Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the fact that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs."}, {"text": "Not an NLP task proper but an extension of Natural Language Generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). 
The first published work by a neural network was published in 2018, 1 the Road, marketed as a novel, contains sixty million words."}, {"text": "Not an NLP task proper but an extension of Natural Language Generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). The first published work by a neural network was published in 2018, 1 the Road, marketed as a novel, contains sixty million words."}, {"text": "Not an NLP task proper but an extension of Natural Language Generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). The first published work by a neural network was published in 2018, 1 the Road, marketed as a novel, contains sixty million words."}, {"text": "Not an NLP task proper but an extension of Natural Language Generation and other NLP tasks is the creation of full-fledged books. The first machine-generated book was created by a rule-based system in 1984 (Racter, The policeman's beard is half-constructed). The first published work by a neural network was published in 2018, 1 the Road, marketed as a novel, contains sixty million words."}]}, {"question": "How does a high pass RC filter work", "positive_ctxs": [{"text": "A high-pass filter (HPF) is an electronic filter that passes signals with a frequency higher than a certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The amount of attenuation for each frequency depends on the filter design."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A popular circuit implementing a second order active R-C filter is the Sallen-Key design, whose schematic diagram is shown here. 
This topology can be adapted to produce low-pass, band-pass, and high pass filters."}, {"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "An alternative to the RTS algorithm is the modified Bryson\u2013Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive"}, {"text": "However, a recursive filter does not always have an infinite impulse response. Some implementations of moving average filter are recursive filters but with a finite impulse response."}, {"text": "For example, if an image contains a low amount of noise but with relatively high magnitude, then a median filter may be more appropriate."}, {"text": "rankletsNonlinear filter also occupy a decisive position in the image processing functions. In a typical pipeline for real-time image processing, it is common to have many nonlinear filter included to form, shape, detect, and manipulate image information. Furthermore, each of these filter types can be parameterized to work one way under certain circumstances and another way under a different set of circumstance using adaptive filter rule generation."}, {"text": "In signal processing, a nonlinear (or non-linear) filter is a filter whose output is not a linear function of its input. 
That is, if the filter outputs signals R and S for two input signals r and s separately, but does not always output \u03b1R + \u03b2S when the input is a linear combination \u03b1r + \u03b2s."}]}, {"question": "What is analysis of variance example", "positive_ctxs": [{"text": "For example, a two-way ANOVA allows a company to compare worker productivity based on two independent variables, such as salary and skill set. It is utilized to observe the interaction between the two factors and tests the effect of two factors at the same time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. 
The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}, {"text": "When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified."}]}, {"question": "What are some best practices for training machine learning models", "positive_ctxs": [{"text": "Best practices \u2013 Machine Learning models and applicationsIdentify the business problem and the right success metrics. Begin with it. Gather correct data. Move the algorithms instead of your data. Initiate tests before the actual launch. Avoid data dropping while machine learning algorithms train. Keep away from objectives that are unaligned. Keep using codes.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Operational errors are inadvertent human errors. Operational errors occur when associates do not follow existing business best practices and policies or a company lacks the proper best practices and policies to ensure work is performed with minimal human error. Operational errors also occur due to a lack of proper training for associates."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. 
Various types of models have been used and researched for machine learning systems."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems."}, {"text": "Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems."}]}, {"question": "How do you find the similarity between two vectors", "positive_ctxs": [{"text": "Cosine similarity measures the similarity between two vectors of an inner product space. It is measured by the cosine of the angle between two vectors and determines whether two vectors are pointing in roughly the same direction. It is often used to measure document similarity in text analysis."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "It is thus a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors oriented at 90\u00b0 relative to each other have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude. The cosine similarity is particularly used in positive space, where the outcome is neatly bounded in"}, {"text": "It is thus a judgment of orientation and not magnitude: two vectors with the same orientation have a cosine similarity of 1, two vectors oriented at 90\u00b0 relative to each other have a similarity of 0, and two vectors diametrically opposed have a similarity of -1, independent of their magnitude. The cosine similarity is particularly used in positive space, where the outcome is neatly bounded in"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "before you compare it with the document vectors in the low-dimensional space. You can do the same for pseudo term vectors:"}, {"text": "Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space. It is defined to equal the cosine of the angle between them, which is also the same as the inner product of the same vectors normalized to both have length 1. The cosine of 0\u00b0 is 1, and it is less than 1 for any angle in the interval (0, \u03c0] radians."}]}, {"question": "Which machine learning technique is used for pattern recognition", "positive_ctxs": [{"text": "Train the model using a suitable machine learning algorithm such as SVM (Support Vector Machines), decision trees, random forest, etc. 
Training is the process through which the model learns or recognizes the patterns in the given data for making suitable predictions. The test set contains already predicted values."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis."}, {"text": "Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis."}, {"text": "Machine learning \u2013 subfield of computer science that examines pattern recognition and computational learning theory in artificial intelligence. There are three broad approaches to machine learning. Supervised learning occurs when the machine is given example inputs and outputs by a teacher so that it can learn a rule that maps inputs to outputs."}, {"text": "In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is \"spam\" or \"non-spam\")."}, {"text": "In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes (for example, determine whether a given email is \"spam\" or \"non-spam\")."}, {"text": "The following outline is provided as an overview of and topical guide to machine learning. 
Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a \"field of study that gives computers the ability to learn without being explicitly programmed\"."}, {"text": "The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a \"field of study that gives computers the ability to learn without being explicitly programmed\"."}]}, {"question": "How do you find second moment in statistics", "positive_ctxs": [{"text": "The 2nd moment around the mean = \u03a3(xi \u2013 \u03bcx)2. The second is the variance. In practice, only the first two moments are ever used in statistics."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "If the function is a probability distribution, then the zeroth moment is the total probability (i.e. one), the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics."}, {"text": "It is a common practice to use a one-tailed hypothesis by default. However, \"If you do not have a specific direction firmly in mind in advance, use a two-sided alternative. Moreover, some users of statistics argue that we should always work with the two-sided alternative."}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "where \u03bc is the mean, \u03c3 is the standard deviation, E is the expectation operator, \u03bc3 is the third central moment, and \u03bat are the t-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant \u03ba3 to the 1.5th power of the second cumulant \u03ba2."}]}, {"question": "What is Internal Consistency in testing", "positive_ctxs": [{"text": "In statistics and research, internal consistency is typically a measure based on the correlations between different items on the same test (or the same subscale on a larger test). It measures whether several items that propose to measure the same general construct produce similar scores."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consistency as defined here is sometimes referred to as weak consistency. 
When we replace convergence in probability with almost sure convergence, then the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Internal consistency ranges between negative infinity and one. Coefficient alpha will be negative whenever there is greater within-subject variability than between-subject variability.A commonly accepted rule of thumb for describing internal consistency is as follows:"}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "Ronald J. Brachman; What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks; IEEE Computer, 16 (10); October 1983"}]}, {"question": "How is gravity related to entropy", "positive_ctxs": [{"text": "Gravity tries to keep things together through attraction and thus tends to lower statistical entropy. 
The universal law of increasing entropy (2nd law of thermodynamics) states that the entropy of an isolated system which is not in equilibrium will tend to increase with time, approaching a maximum value at equilibrium."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "If the universe can be considered to have generally increasing entropy, then \u2013 as Roger Penrose has pointed out \u2013 gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size."}, {"text": "The cross entropy loss is closely related to the Kullback\u2013Leibler divergence between the empirical distribution and the predicted distribution. The cross entropy loss is ubiquitous in modern deep neural networks."}, {"text": "With X1, ..., XN iid random variables, an N-dimensional \"box\" can be constructed with sides X1, ..., XN. Costa and Cover show that the (Shannon) differential entropy h(X) is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set."}, {"text": "For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy)."}, {"text": "For continuous distributions, the Shannon entropy cannot be used, as it is only defined for discrete probability spaces. 
Instead Edwin Jaynes (1963, 1968, 2003) gave the following formula, which is closely related to the relative entropy (see also differential entropy)."}, {"text": "For each center of gravity and each axis, p-value to judge the significance of the difference between the center of gravity and origin.These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in and Husson, L\u00ea & Pag\u00e8s 2009 and Pag\u00e8s 2013."}, {"text": "For each center of gravity and each axis, p-value to judge the significance of the difference between the center of gravity and origin.These results are what is called introducing a qualitative variable as supplementary element. This procedure is detailed in and Husson, L\u00ea & Pag\u00e8s 2009 and Pag\u00e8s 2013."}]}, {"question": "What is connectionist AI", "positive_ctxs": [{"text": "Connectionism, an approach to artificial intelligence (AI) that developed out of attempts to understand how the human brain works at the neural level and, in particular, how people learn and remember. (For that reason, this approach is sometimes referred to as neuronlike computing.)"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "In most connectionist models, networks change over time. 
A closely related and very common aspect of connectionist models is activation. At any time, a unit in the network has an activation, which is a numerical value intended to represent some aspect of the unit."}, {"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happened is that those structures were then assembled in arrays to keep things nicely organized. This is array of structures (AoS)."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}]}, {"question": "Is Poisson process a Markov process", "positive_ctxs": [{"text": "An (ordinary) Poisson process is a special Markov process [ref. to Stadje in this volume], in continuous time, in which the only possible jumps are to the next higher state. A Poisson process may also be viewed as a counting process that has particular, desirable, properties."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "This process has the natural numbers as its state space and the non-negative numbers as its index set. This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process. The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and L\u00e9vy processes.The homogeneous Poisson process can be defined and generalized in different ways."}, {"text": "This process has the natural numbers as its state space and the non-negative numbers as its index set. 
This process is also called the Poisson counting process, since it can be interpreted as an example of a counting process.If a Poisson process is defined with a single positive constant, then the process is called a homogeneous Poisson process. The homogeneous Poisson process is a member of important classes of stochastic processes such as Markov processes and L\u00e9vy processes.The homogeneous Poisson process can be defined and generalized in different ways."}, {"text": "Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space)."}, {"text": "Markov processes are stochastic processes, traditionally in discrete or continuous time, that have the Markov property, which means the next value of the Markov process depends on the current value, but it is conditionally independent of the previous values of the stochastic process. 
In other words, the behavior of the process in the future is stochastically independent of its behavior in the past, given the current state of the process.The Brownian motion process and the Poisson process (in one dimension) are both examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time.A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it has been also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space)."}, {"text": ", the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.Defined on the real line, the Poisson process can be interpreted as a stochastic process, among other random objects. But then it can be defined on the"}, {"text": ", the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randomly occurring in certain time windows.Defined on the real line, the Poisson process can be interpreted as a stochastic process, among other random objects. 
But then it can be defined on the"}, {"text": "A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markovian or a Markov process. The most famous Markov process is a Markov chain."}]}, {"question": "Where can cluster analysis be applied", "positive_ctxs": [{"text": "Clustering analysis is broadly used in many applications such as market research, pattern recognition, data analysis, and image processing. Clustering can also help marketers discover distinct groups in their customer base. And they can characterize their customer groups based on the purchasing patterns."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. 
Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "In data mining and statistics, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. Strategies for hierarchical clustering generally fall into two types:"}, {"text": "Swarm intelligence has also been applied for data mining and cluster analysis. Ant based models are further subject of modern management theory."}]}, {"question": "What is probability of false alarm", "positive_ctxs": [{"text": "The false alarm probability is the probability that exceeds a certain threshold when there is no signal."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In most radar detectors, the threshold is set in order to achieve a required probability of false alarm (or equivalently, false alarm rate or time between false alarms)."}, {"text": "In this case, a changing threshold can be used, where the threshold level is raised and lowered to maintain a constant probability of false alarm. This is known as constant false alarm rate (CFAR) detection."}, {"text": "When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate ) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. 
Using the terminology suggested here, it is simply"}, {"text": "When performing multiple comparisons in a statistical framework such as above, the false positive ratio (also known as the false alarm ratio, as opposed to false positive rate / false alarm rate ) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply"}, {"text": "The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning. The false-positive rate is also known as probability of false alarm and can be calculated as (1 \u2212 specificity)."}, {"text": "The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning. The false-positive rate is also known as probability of false alarm and can be calculated as (1 \u2212 specificity)."}, {"text": "The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings. The true-positive rate is also known as sensitivity, recall or probability of detection in machine learning. The false-positive rate is also known as probability of false alarm and can be calculated as (1 \u2212 specificity)."}]}, {"question": "How do you know if a classification model is accurate", "positive_ctxs": [{"text": "Classification Accuracy It is the ratio of number of correct predictions to the total number of input samples. It works well only if there are equal number of samples belonging to each class. 
For example, consider that there are 98% samples of class A and 2% samples of class B in our training set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Suppose the police officers then stop a driver at random to administer a breathalyzer test. It indicates that the driver is drunk. We assume you do not know anything else about them."}, {"text": "If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute."}, {"text": "The following question was posed to Jeff Hawkins in September 2011 with regard to cortical learning algorithms: \"How do you know if the changes you are making to the model are good or not?\" To which Jeff's response was \"There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm, there are many predictions that we can make, and those can be tested."}, {"text": "To conduct a Bayes linear analysis it is necessary to identify some values that you expect to know shortly by making measurements D and some future value which you would like to know B. Here D refers to a vector containing data and B to a vector containing quantities you would like to predict. 
For the following example B and D are taken to be two-dimensional vectors i.e."}, {"text": "Consider first a traditional Bayesian Analysis where you expect to shortly know D and you would like to know more about some other observable B. In the traditional Bayesian approach it is required that every possible outcome is enumerated i.e. every possible outcome is the cross product of the partition of a set of B and D. If represented on a computer where B requires n bits and D m bits then the number of states required is"}]}, {"question": "What is the difference between supervised and unsupervised machine learning", "positive_ctxs": [{"text": "In a supervised learning model, the algorithm learns on a labeled dataset, providing an answer key that the algorithm can use to evaluate its accuracy on training data. An unsupervised model, in contrast, provides unlabeled data that the algorithm tries to make sense of by extracting features and patterns on its own."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "At the moment, automated learning methods can further separate into supervised and unsupervised machine learning. 
Pattern extraction with machine learning from annotated and unannotated text has been explored extensively by academic researchers."}, {"text": "The goals of learning are understanding and prediction. Learning falls into many categories, including supervised learning, unsupervised learning, online learning, and reinforcement learning. From the perspective of statistical learning theory, supervised learning is best understood."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}, {"text": "Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data)."}]}, {"question": "What is the difference between correlation and correlation coefficient", "positive_ctxs": [{"text": "Correlation is the concept of linear relationship between two variables. 
Whereas correlation coefficient is a measure that measures linear relationship between two variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formally, the partial correlation between X and Y given a set of n controlling variables Z = {Z1, Z2, ..., Zn}, written \u03c1XY\u00b7Z, is the correlation between the residuals eX and eY resulting from the linear regression of X with Z and of Y with Z, respectively. The first-order partial correlation (i.e., when n = 1) is the difference between a correlation and the product of the removable correlations divided by the product of the coefficients of alienation of the removable correlations. The coefficient of alienation, and its relation with joint variance through correlation are available in Guilford (1973, pp."}, {"text": "The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative."}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "It is a corollary of the Cauchy\u2013Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between -1 and +1. 
The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), \u22121 in the case of a perfect inverse (decreasing) linear relationship (anticorrelation), and some value in the open interval"}, {"text": "If we compute the Pearson correlation coefficient between variables X and Y, the result is approximately 0.970, while if we compute the partial correlation between X and Y, using the formula given above, we find a partial correlation of 0.919. The computations were done using R with the following code."}, {"text": "This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are Markedness (\u0394p) and Youden's J statistic (Informedness or \u0394p')."}, {"text": "For example, Spearman's rank correlation coefficient is useful to measure the statistical dependence between the rankings of athletes in two tournaments. And the Kendall rank correlation coefficient is another approach."}]}, {"question": "How are decision trees used for regression", "positive_ctxs": [{"text": "Decision tree builds regression or classification models in the form of a tree structure. It breaks down a dataset into smaller and smaller subsets while at the same time an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Decision trees are among the most popular machine learning algorithms given their intelligibility and simplicity. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making)."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. 
In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}, {"text": "Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making."}]}, {"question": "How do you decrease P value in regression", "positive_ctxs": [{"text": "Increase the power of your analysis: larger sample size; better data collection (reducing error); better/correct model (more complex model, account for covariates, etc.); use a one-sided test instead of a two-sided test."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?"}, {"text": "Suppose we have P possible predictors in some model. Vector \u03b3 has a length equal to P and consists of zeros and ones. This vector indicates whether a particular variable is included in the regression or not."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? 
How do axons know where to target and how to reach these targets?"}, {"text": "Compensatory fuzzy logic (CFL) is a branch of fuzzy logic with modified rules for conjunction and disjunction. When the truth value of one component of a conjunction or disjunction is increased or decreased, the other component is decreased or increased to compensate. This increase or decrease in truth value may be offset by the increase or decrease in another component."}, {"text": "Another way to do this is to precede the question by information that supports the \"desired\" answer. For example, more people will likely answer \"yes\" to the question \"Given the increasing burden of taxes on middle-class families, do you support cuts in income tax?\" than to the question \"Considering the rising federal budget deficit and the desperate need for more revenue, do you support cuts in income tax?\""}]}, {"question": "What are fixed effects in regression", "positive_ctxs": [{"text": "In many applications including econometrics and biostatistics a fixed effects model refers to a regression model in which the group means are fixed (non-random) as opposed to a random effects model in which the group means are a random sample from a population."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The type of statistical tests that are employed in multilevel models depend on whether one is examining fixed effects or variance components. When examining fixed effects, the tests are compared with the standard error of the fixed effect, which results in a Z-test. A t-test can also be computed."}, {"text": "are individual-specific, time-invariant effects (for example in a panel of countries this could include geography, climate etc.) which are fixed over time., whereas"}, {"text": "provided best linear unbiased estimates (BLUE) of fixed effects and best linear unbiased predictions (BLUP) of random effects. 
Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest."}, {"text": "if and only if spring, otherwise equals zero. In the panel data, fixed effects estimator dummies are created for each of the units in cross-sectional data (e.g. firms or countries) or periods in a pooled time-series."}, {"text": "if and only if spring, otherwise equals zero. In the panel data, fixed effects estimator dummies are created for each of the units in cross-sectional data (e.g. firms or countries) or periods in a pooled time-series."}, {"text": "A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences."}, {"text": "Different assumptions can be made on the precise structure of this general model. Two important models are the fixed effects model and the random effects model."}]}, {"question": "What is the Fourier transform of a periodic signal", "positive_ctxs": [{"text": "Specifically, for periodic signals we can define the Fourier transform as an impulse train with the impulses occurring at integer multiples of the fundamental frequency and with amplitudes equal to 2\u03c0 times the Fourier series coefficients."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Discrete-time Fourier transform (DTFT): Equivalent to the Fourier transform of a \"continuous\" function that is constructed from the discrete input function by using the sample values to modulate a Dirac comb. 
When the sample values are derived by sampling a function on the real line, \u0192(x), the DTFT is equivalent to a periodic summation of the Fourier transform of \u0192. The DTFT output is always periodic (cyclic)."}, {"text": "Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the discrete-time Fourier transform (DTFT) is a Fourier series representation of a periodic summation of X(f):"}, {"text": "When the non-zero portion of the input function has finite duration, the Fourier transform is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the portion that was analyzed. The same discrete set is obtained by treating the duration of the segment as one period of a periodic function and computing the Fourier series coefficients."}, {"text": "Since the Fourier transform of the Gaussian function yields a Gaussian function, the signal (preferably after being divided into overlapping windowed blocks) can be transformed with a Fast Fourier transform, multiplied with a Gaussian function and transformed back. This is the standard procedure of applying an arbitrary finite impulse response filter, with the only difference that the Fourier transform of the filter window is explicitly known."}, {"text": "The spectrum analyzer measures the magnitude of the short-time Fourier transform (STFT) of an input signal. If the signal being analyzed can be considered a stationary process, the STFT is a good smoothed estimate of its power spectral density."}, {"text": "In this case the Fourier series is finite and its value is equal to the sampled values at all points. The set of coefficients is known as the discrete Fourier transform (DFT) of the given sample sequence. 
The DFT is one of the key tools of digital signal processing, a field whose applications include radar, speech encoding, image compression."}, {"text": "This is G, since the Fourier transform of this integral is easy. Each fixed \u03c4 contribution is a Gaussian in x, whose Fourier transform is another Gaussian of reciprocal width in k."}]}, {"question": "What is difference between FIR and IIR filters", "positive_ctxs": [{"text": "The crucial difference between FIR and IIR filter is that the FIR filter provides an impulse response of finite period. As against IIR is a type of filter that generates impulse response of infinite duration for a dynamic system."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A digital IIR filter can generally approximate a desired filter response using less computing power than a FIR filter, however this advantage is more often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is absolutely impossible with any IIR filter."}, {"text": "Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components. Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software."}, {"text": "of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. 
The frequency response also includes the phase as a function of frequency, however in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible."}, {"text": "When the anti-aliasing filter is an IIR design, it relies on feedback from output to input, prior to the second step. With FIR filtering, it is an easy matter to compute only every Mth output. The calculation performed by a decimating FIR filter for the nth output sample is a dot product:"}, {"text": "Thus M low-order FIR filters are each filtering one of M multiplexed phases of the input stream, and the M outputs are being summed. This viewpoint offers a different implementation that might be advantageous in a multi-processor architecture. In other words, the input stream is demultiplexed and sent through a bank of M filters whose outputs are summed."}, {"text": "LTI system theory describes linear time-invariant (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and vice versa. From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "What is bivariate in statistics", "positive_ctxs": [{"text": "Bivariate statistics is a type of inferential statistics that deals with the relationship between two variables. When bivariate statistics is employed to examine a relationship between two variables, bivariate data is used. 
Bivariate data consists of data collected from a sample on two different variables."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The learning of PGMs encoding multivariate distributions is a computationally expensive task, therefore, it is usual for EDAs to estimate multivariate statistics from bivariate statistics. Such relaxation allows PGM to be built in polynomial time in"}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. 
What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}]}, {"question": "Is the Implicit Association Test having poor test retest reliability", "positive_ctxs": [{"text": "IAT is a popular measure in social psychology to measure the relative strength of association between pairs of concepts (Greenwald, McGhee, & Schwartz, 1998). Studies have found that racial bias IAT studies have a test-retest reliability score of only 0.44, while the IAT overall is just around 0.5."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented."}, {"text": "Carryover effect, particularly if the interval between test and retest is short. When retested, people may remember their original answer, which could affect answers on the second administration."}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? 
Is the test still worthwhile given the risks?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}, {"text": "Testing and Test Control Notation (TTCN), both TTCN-2 and TTCN-3, follows actor model rather closely. In TTCN actor is a test component: either parallel test component (PTC) or main test component (MTC). Test components can send and receive messages to and from remote partners (peer test components or test system interface), the latter being identified by its address."}, {"text": "Statistical and graphical tests are used to evaluate the correspondence of data with the model. Certain tests are global, while others focus on specific items or people. Certain tests of fit provide information about which items can be used to increase the reliability of a test by omitting or correcting problems with poor items."}, {"text": "The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time."}]}, {"question": "How do you describe bimodal distribution", "positive_ctxs": [{"text": "Bimodal Distribution: Two Peaks. The bimodal distribution has two peaks. However, if you think about it, the peaks in any distribution are the most common number(s). The two peaks in a bimodal distribution also represent two local maximums; these are points where the data points stop increasing and start decreasing."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? 
What purpose is the study to be used for?"}, {"text": "In statistics, a bimodal distribution is a probability distribution with two different modes, which may also be referred to as a bimodal distribution. These appear as distinct peaks (local maxima) in the probability density function, as shown in Figures 1 and 2. Categorical, continuous, and discrete data can all form bimodal distributions."}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "correspond to the transmission associated with each eigenchannel. One of the remarkable properties of diffusive systems is their bimodal eigenvalue distribution with"}, {"text": "The logic behind this coefficient is that a bimodal distribution with light tails will have very low kurtosis, an asymmetric character, or both \u2013 all of which increase this coefficient."}, {"text": "A bimodal distribution most commonly arises as a mixture of two different unimodal distributions (i.e. distributions having only one mode). In other words, the bimodally distributed random variable X is defined as"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What happens if the sample size increases", "positive_ctxs": [{"text": "The central limit theorem states that the sampling distribution of the mean approaches a normal distribution, as the sample size increases. 
Therefore, as a sample size increases, the sample mean and standard deviation will be closer in value to the population mean \u03bc and standard deviation \u03c3 ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "What happens if the person's address as stored in the database is incorrect? Suppose an official accidentally entered the wrong address or date? Or, suppose the person lied about their address for some reason."}, {"text": "This means that for a given effect size, the significance level increases with the sample size. Unlike the t-test statistic, the effect size aims to estimate a population parameter and is not affected by the sample size."}, {"text": "A consistent sequence of estimators is a sequence of estimators that converge in probability to the quantity being estimated as the index (usually the sample size) grows without bound. In other words, increasing the sample size increases the probability of the estimator being close to the population parameter."}, {"text": "The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error."}, {"text": "A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment."}, {"text": "A measurement system can be accurate but not precise, precise but not accurate, neither, or both. 
For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The result would be a consistent yet inaccurate string of results from the flawed experiment."}]}, {"question": "How do you do linear discriminant analysis", "positive_ctxs": [{"text": "5:1515:11Suggested clip \u00b7 109 secondsStatQuest: Linear Discriminant Analysis (LDA) clearly explained YouTubeStart of suggested clipEnd of suggested clip"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. 
The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}, {"text": "Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification."}]}, {"question": "How do you evaluate a rank algorithm", "positive_ctxs": [{"text": "1 Answer. Normalized discounted cumulative gain is one of the standard method of evaluating ranking algorithms. You will need to provide a score to each of the recommendations that you give. 
If your algorithm assigns a low (better) rank to a high scoring entity, your NDCG score will be higher, and vice versa."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "Imagine you have a cluster of news articles on a particular event, and you want to produce one summary. Each article is likely to have many similar sentences, and you would only want to include distinct ideas in the summary. To address this issue, LexRank applies a heuristic post-processing step that builds up a summary by adding sentences in rank order, but discards any sentences that are too similar to ones already placed in the summary."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. 
It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is linear activation function in neural network", "positive_ctxs": [{"text": "Linear Activation Function A linear activation function takes the form: A = cx. It takes the inputs, multiplied by the weights for each neuron, and creates an output signal proportional to the input. In one sense, a linear function is better than a step function because it allows multiple outputs, not just yes and no."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "When the activation function is non-linear, then a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property."}, {"text": "In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. 
Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control."}, {"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks.A unit employing the rectifier is also called a rectified linear unit (ReLU).Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "The rectifier is, as of 2017, the most popular activation function for deep neural networks.A unit employing the rectifier is also called a rectified linear unit (ReLU).Rectified linear units find applications in computer vision and speech recognition using deep neural nets and computational neuroscience."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "The softmax function, also known as softargmax or normalized exponential function, is a generalization of the logistic function to multiple dimensions. It is used in multinomial logistic regression and is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom."}, {"text": "The most basic model of a neuron consists of an input with some synaptic weight vector and an activation function or transfer function inside the neuron determining output. 
This is the basic structure used for artificial neurons, which in a neural network often looks like"}]}, {"question": "What is optimal policy in reinforcement learning", "positive_ctxs": [{"text": "\u23e9 optimal policy: the best action to take at each state, for maximum rewards over time. To help our agent do this, we need two things: A way to determine the value of a state in MDP. An estimated value of an action taken at a particular state."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return"}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "Due to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. 
In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment."}, {"text": "These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies."}, {"text": "These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies."}, {"text": "When the scheduling policy is dynamic in the sense that it can make adjustments during the process based on up-to-date information, posterior Gittins index is developed to find the optimal policy that minimizes the expected discounted reward in the class of dynamic policies."}]}, {"question": "What is the difference between logistic regression and classification", "positive_ctxs": [{"text": "Classification is a machine learning concept. It is used for categorical dependent variables, where we need to classify into required groups. 
Logistic regression is an algorithm within classification."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "From this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. 
The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square-loss,"}, {"text": "This formulation\u2014which is standard in discrete choice models\u2014makes clear the relationship between logistic regression (the \"logit model\") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, \"bell curve\" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data)."}, {"text": "This formulation\u2014which is standard in discrete choice models\u2014makes clear the relationship between logistic regression (the \"logit model\") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, \"bell curve\" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data)."}]}, {"question": "Can you Factorise matrices", "positive_ctxs": [{"text": "In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? 
Or does it necessarily require solving a large number of unrelated problems?"}, {"text": "Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of unrelated problems?"}, {"text": "Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of unrelated problems?"}, {"text": "And you just have to have somebody close to the power cord. Right when you see it about to happen, you gotta yank that electricity out of the wall, man."}, {"text": "Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices"}, {"text": "Can the algorithms be improved? : Once the programmer judges a program \"fit\" and \"effective\"\u2014that is, it computes the function intended by its author\u2014then the question becomes, can it be improved?"}, {"text": "Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word \"eigenvector\" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the"}]}, {"question": "What is Histogram of Oriented Gradients and how does it work", "positive_ctxs": [{"text": "The histogram of oriented gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Knabe, J. F., Nehaniv, C. L. and Schilstra, M. J. \"Evolution and Morphogenesis of Differentiated Multicellular Organisms: Autonomously Generated Diffusion Gradients for Positional Information\". 
In Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, pages 321-328, MIT Press, 2008. corr."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "What emerges then is that info-gap theory is yet to explain in what way, if any, it actually attempts to deal with the severity of the uncertainty under consideration. Subsequent sections of this article will address this severity issue and its methodological and practical implications."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1."}, {"text": "What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. 
If there were no recollection component, zROC would have a predicted slope of 1."}]}, {"question": "Why do we use t distribution", "positive_ctxs": [{"text": "The t\u2010distribution is used as an alternative to the normal distribution when sample sizes are small in order to estimate confidence or determine critical values that an observation is a given distance from the mean."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "where t is the t-statistic with n \u2212 1 degrees of freedom. Hence we may use the known exact distribution of tn\u22121 to draw inferences."}, {"text": "By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution."}, {"text": "By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess. Thus the maximum entropy distribution is the only reasonable distribution."}, {"text": "for t from 0 to n \u2212 k do // t is time. n is the length of the training sequence"}, {"text": "has a Student's t distribution with n \u2212 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters \u03bc and \u03c32; i.e., it is a pivotal quantity. Suppose we wanted to calculate a 95% confidence interval for \u03bc."}, {"text": "has a Student's t distribution with n \u2212 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters \u03bc and \u03c32; i.e., it is a pivotal quantity. 
Suppose we wanted to calculate a 95% confidence interval for \u03bc."}, {"text": "These metaphors are prevalent in communication and we do not just use them in language; we actually perceive and act in accordance with the metaphors."}]}, {"question": "When would you use a mixed model", "positive_ctxs": [{"text": "Mixed effects models are useful when we have data with more than one source of random variability. For example, an outcome may be measured more than once on the same person (repeated measures taken over time). When we do that we have to account for both within-person and across-person variability."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "As an example, suppose a linear prediction model learns from some data (perhaps primarily drawn from large beaches) that a 10 degree temperature decrease would lead to 1,000 fewer people visiting the beach. This model is unlikely to generalize well over different sized beaches. More specifically, the problem is that if you use the model to predict the new attendance with a temperature drop of 10 for a beach that regularly receives 50 beachgoers, you would predict an impossible attendance value of \u2212950."}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. 
However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}]}, {"question": "What is inverted dropout technique", "positive_ctxs": [{"text": "Inverted dropout is a variant of the original dropout technique developed by Hinton et al. The one difference is that, during the training of a neural network, inverted dropout scales the activations by the inverse of the keep probability q=1\u2212p q = 1 \u2212 p ."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "The inverted index data structure is a central component of a typical search engine indexing algorithm. A goal of a search engine implementation is to optimize the speed of the query: find the documents where word X occurs. Once a forward index is developed, which stores lists of words per document, it is next inverted to develop an inverted index."}, {"text": "Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity."}, {"text": "In computer science, an inverted index (also referred to as a postings file or inverted file) is a database index storing a mapping from content, such as words or numbers, to its locations in a table, or in a document or a set of documents (named in contrast to a forward index, which maps from documents to content). The purpose of an inverted index is to allow fast full-text searches, at a cost of increased processing when a document is added to the database. 
The inverted file may be the database file itself, rather than its index."}, {"text": "There are two main variants of inverted indexes: A record-level inverted index (or inverted file index or just inverted file) contains a list of references to documents for each word. A word-level inverted index (or full inverted index or inverted list) additionally contains the positions of each word within a document. The latter form offers more functionality (like phrase searches), but needs more processing power and space to be created."}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}]}, {"question": "What is a particle filter used for", "positive_ctxs": [{"text": "Particle filters or Sequential Monte Carlo (SMC) methods are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. Particle filters update their prediction in an approximate (statistical) manner."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. 
EnKF is related to the particle filter (in this context, a particle is the same thing as ensemble member) but the EnKF makes the assumption that all probability distributions involved are Gaussian; when it is applicable, it is much more efficient than the particle filter."}, {"text": "Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is."}, {"text": "The objective of a particle filter is to estimate the posterior density of the state variables given the observation variables. The particle filter is designed for a hidden Markov Model, where the system consists of both hidden and observable variables. The observable variables (observation process) are related to the hidden variables (state-process) by some functional form that is known."}, {"text": "Related filters attempting to relax the Gaussian assumption in EnKF while preserving its advantages include filters that fit the state pdf with multiple Gaussian kernels, filters that approximate the state pdf by Gaussian mixtures, a variant of the particle filter with computation of particle weights by density estimation, and a variant of the particle filter with thick tailed data pdf to alleviate particle filter degeneracy."}, {"text": "The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation. 
Some other Bayesian localization algorithms, such as the Kalman filter (and variants, the extended Kalman filter and the unscented Kalman filter), assume the belief of the robot is close to being a Gaussian distribution and do not perform well for situations where the belief is multimodal. For example, a robot in a long corridor with many similar-looking doors may arrive at a belief that has a peak for each door, but the robot is unable to distinguish which door it is at."}, {"text": "In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm."}, {"text": "In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm."}]}, {"question": "What do you understand by bias variance trade off", "positive_ctxs": [{"text": "Bias is the simplifying assumptions made by the model to make the target function easier to approximate. Variance is the amount that the estimate of the target function will change given different training data. Trade-off is tension between the error introduced by the bias and the variance."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "are the k nearest neighbors of x in the training set. The bias (first term) is a monotone rising function of k, while the variance (second term) drops off as k is increased. 
In fact, under \"reasonable assumptions\" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity."}, {"text": "What is more there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.A simple example of modus ponens often used in introductory logic books is \"If you are human then you are mortal\". This can be represented in pseudocode as:"}, {"text": "ANCOVA can be used to increase statistical power (the probability a significant difference is found between groups when one exists) by reducing the within-group error variance. In order to understand this, it is necessary to understand the test used to evaluate differences between groups, the F-test. The F-test is computed by dividing the explained variance between groups (e.g., medical recovery differences) by the unexplained variance within the groups."}, {"text": "ANCOVA can be used to increase statistical power (the probability a significant difference is found between groups when one exists) by reducing the within-group error variance. In order to understand this, it is necessary to understand the test used to evaluate differences between groups, the F-test. The F-test is computed by dividing the explained variance between groups (e.g., medical recovery differences) by the unexplained variance within the groups."}, {"text": "What constitutes narrow or wide limits of agreement or large or small bias is a matter of a practical assessment in each case."}, {"text": "You are allowed to select k of these n boxes all at once and break them open simultaneously, gaining access to k keys. 
What is the probability that using these keys you can open all n boxes, where you use a found key to open the box it belongs to and repeat."}]}, {"question": "How does batch gradient descent work", "positive_ctxs": [{"text": "Batch gradient descent is a variation of the gradient descent algorithm that calculates the error for each example in the training dataset, but only updates the model after all training examples have been evaluated. One cycle through the entire training dataset is called a training epoch."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Consequently, the hinge loss function cannot be used with gradient descent methods or stochastic gradient descent methods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient at"}, {"text": "Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. Others sustain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks.After batch norm, many other in-layer normalization methods have been introduced, such as instance normalization, layer normalization, group normalization."}, {"text": "Stochastic learning introduces \"noise\" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. 
A common compromise is to use \"mini-batches\", small batches with samples in each batch selected stochastically from the entire data set."}, {"text": "Stochastic learning introduces \"noise\" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use \"mini-batches\", small batches with samples in each batch selected stochastically from the entire data set."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}, {"text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. It can be regarded as a stochastic approximation of gradient descent optimization."}]}, {"question": "What is the difference between linear filters and nonlinear filters", "positive_ctxs": [{"text": "Linear filtering is the filtering method in which the value of an output pixel is a linear combination of the neighbouring input pixels. It can be done with convolution. For example, mean/average filters or Gaussian filtering. 
A non-linear filtering is one that cannot be done with convolution or Fourier multiplication."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "From the foregoing, we can know that the nonlinear filters have quite different behavior compared to linear filters. The most important characteristic is that, for nonlinear filters, the filter output or response of the filter does not obey the principles outlined earlier, particularly scaling and shift invariance. Furthermore, a nonlinear filter can produce results that vary in a non-intuitive manner."}, {"text": "However, nonlinear filters are considerably harder to use and design than linear ones, because the most powerful mathematical tools of signal analysis (such as the impulse response and the frequency response) cannot be used on them. Thus, for example, linear filters are often used to remove noise and distortion that was created by nonlinear processes, simply because the proper non-linear filter would be too hard to design and construct."}, {"text": "Particle filters are also an approximation, but with enough particles they can be much more accurate. The nonlinear filtering equation is given by the recursion"}, {"text": "Time-dependent input is transformed by complex linear and nonlinear filters into a spike train in the output. Again, the spike response model or the adaptive integrate-and-fire model enable to predict the spike train in the output for arbitrary time-dependent input, whereas an artificial neuron or a simple leaky integrate-and-fire does not."}, {"text": "Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ? ( #5) \u2013 Finale, summing up, and my own view"}, {"text": "An important difference between these two rules is that a forecaster should strive to maximize the quadratic score yet minimize the Brier score. 
This is due to a negative sign in the linear transformation between them."}, {"text": "Least-squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases."}]}, {"question": "How do you prepare linear algebra", "positive_ctxs": [{"text": "For linear algebra, it's very helpful to prepare by doing simple practice problems with the basic axioms of vector spaces and inner products. I was always mediocre at algebra, but good at visualizing 2D and 3D things."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "is called a system of linear equations or a linear system.Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory has been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems."}, {"text": "and their representations in vector spaces and through matrices.Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. 
Also, functional analysis, a branch of mathematical analysis, may be viewed as basically the application of linear algebra to spaces of functions."}, {"text": "Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best known implementations."}, {"text": "Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "How do you find joint probability", "positive_ctxs": [{"text": "Joint probability is calculated by multiplying the probability of event A, expressed as P(A), by the probability of event B, expressed as P(B). For example, suppose a statistician wishes to know the probability that the number five will occur twice when two dice are rolled at the same time."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "They chose the interview questions from a given list. 
When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, \"What do you find unpleasant about noisy parties?\" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, \"What would you do to liven up a dull party?\""}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "To find the joint probability distribution, more data is required. For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2\u00d73 block of entries."}, {"text": "To find the joint probability distribution, more data is required. For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2\u00d73 block of entries."}, {"text": "To find the joint probability distribution, more data is required. For example, suppose P(L = red) = 0.2, P(L = yellow) = 0.1, and P(L = green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2\u00d73 block of entries."}]}, {"question": "What is the meaning of variance", "positive_ctxs": [{"text": "Variance (\u03c32) in statistics is a measurement of the spread between numbers in a data set. 
That is, it measures how far each number in the set is from the mean and therefore from every other number in the set."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In this way, an interpretation provides semantic meaning to the terms, the predicates, and formulas of the language. The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic."}, {"text": "\"In PCA, 1.00s are put in the diagonal meaning that all of the variance in the matrix is to be accounted for (including variance unique to each variable, variance common among variables, and error variance). That would, therefore, by definition, include all of the variance in the variables. In contrast, in EFA, the communalities are put in the diagonal meaning that only the variance shared with other variables is to be accounted for (excluding variance unique to each variable and error variance)."}, {"text": "Introduced in CART, variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node N is defined as the total reduction of the variance of the target variable Y due to the split at this node:"}, {"text": "Introduced in CART, variance reduction is often employed in cases where the target variable is continuous (regression tree), meaning that use of many other metrics would first require discretization before being applied. The variance reduction of a node N is defined as the total reduction of the variance of the target variable Y due to the split at this node:"}, {"text": "What is the epistemological status of the laws of logic? What sort of argument is appropriate for criticizing purported principles of logic? 
In an influential paper entitled \"Is Logic Empirical?\""}, {"text": "A related effect size is r2, the coefficient of determination (also referred to as R2 or \"r-squared\"), calculated as the square of the Pearson correlation r. In the case of paired data, this is a measure of the proportion of variance shared by the two variables, and varies from 0 to 1. For example, with an r of 0.21 the coefficient of determination is 0.0441, meaning that 4.4% of the variance of either variable is shared with the other variable. The r2 is always positive, so does not convey the direction of the correlation between the two variables."}, {"text": "One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss\u2013Markov theorem does not apply, meaning that OLS estimators are not the Best Linear Unbiased Estimators (BLUE) and their variance is not the lowest of all other unbiased estimators."}]}, {"question": "How do coreference resolution anaphora resolution algorithms work", "positive_ctxs": [{"text": "3.1. Coreference resolution (or anaphora) is an expression, the interpretation of which depends on another word or phrase presented earlier in the text (antecedent). For example, \u201cTom has a backache. He was injured.\u201d Here the words \u201cTom\u201d and \u201cHe\u201d refer to the same entity."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Cluster quality metrics commonly used to evaluate coreference resolution algorithms are Rand index, adjusted Rand index or different mutual information-based methods."}, {"text": "It takes a lot of work to succeed. d. Sometimes it's the loudest who have the most influence. Pleonastic uses are not considered referential, and so are not part of coreference. Approaches to coreference resolution can broadly be separated into mention-pair, mention-ranking or entity-based algorithms. 
Mention-pair algorithms involve binary decisions if a pair of two given mentions belong to the same entity."}, {"text": "Given a sentence or larger chunk of text, determine which words (\"mentions\") refer to the same objects (\"entities\"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called \"bridging relationships\" involving referring expressions."}, {"text": "Given a sentence or larger chunk of text, determine which words (\"mentions\") refer to the same objects (\"entities\"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called \"bridging relationships\" involving referring expressions."}, {"text": "Given a sentence or larger chunk of text, determine which words (\"mentions\") refer to the same objects (\"entities\"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called \"bridging relationships\" involving referring expressions."}, {"text": "Given a sentence or larger chunk of text, determine which words (\"mentions\") refer to the same objects (\"entities\"). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called \"bridging relationships\" involving referring expressions."}, {"text": "Given a sentence or larger chunk of text, determine which words (\"mentions\") refer to the same objects (\"entities\"). 
Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names to which they refer. The more general task of coreference resolution also includes identifying so-called \"bridging relationships\" involving referring expressions."}]}, {"question": "What is meant by multinomial logistic regression", "positive_ctxs": [{"text": "Multinomial logistic regression is used to predict categorical placement in or the probability of category membership on a dependent variable based on multiple independent variables. The independent variables can be either dichotomous (i.e., binary) or continuous (i.e., interval or ratio in scale)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. 
(The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class."}, {"text": "In such a situation, ordinary least squares (the basic regression technique) is widely seen as inadequate; instead probit regression or logistic regression is used. Further, sometimes there are three or more categories for the dependent variable \u2014 for example, no charges, charges, and death sentences. In this case, the multinomial probit or multinomial logit technique is used."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}, {"text": "Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression."}]}, {"question": "What is marginal probability in statistics", "positive_ctxs": [{"text": "Marginal probability: the probability of an event occurring (p(A)), it may be thought of as an unconditional probability. It is not conditioned on another event. Example: the probability that a card drawn is red (p(red) = 0.5)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. 
This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "Traditionally, statistics was concerned with drawing inferences using a semi-standardized methodology that was \"required learning\" in most sciences. This tradition has changed with the use of statistics in non-inferential contexts. What was once considered a dry subject, taken in many fields as a degree-requirement, is now viewed enthusiastically."}, {"text": "If more than one random variable is defined in a random experiment, it is important to distinguish between the joint probability distribution of X and Y and the probability distribution of each variable individually. 
The individual probability distribution of a random variable is referred to as its marginal probability distribution. In general, the marginal probability distribution of X can be determined from the joint probability distribution of X and other random variables."}, {"text": "Statistical MDL learning is very strongly connected to probability theory and statistics through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to Bayesian inference: code length of model and data together in MDL correspond respectively to prior probability and marginal likelihood in the Bayesian framework.While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov normalized maximum likelihood code, which plays a central role in current MDL theory, but has no equivalent in Bayesian inference."}]}, {"question": "How does Dbscan algorithm work", "positive_ctxs": [{"text": "DBSCAN works as such: Divides the dataset into n dimensions. For each point in the dataset, DBSCAN forms an n dimensional shape around that data point, and then counts how many data points fall within that shape. DBSCAN counts this shape as a cluster."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "How much does the ball cost?\" many subjects incorrectly answer $0.10. An explanation in terms of attribute substitution is that, rather than work out the sum, subjects parse the sum of $1.10 into a large amount and a small amount, which is easy to do."}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. 
It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}, {"text": "The book How to Lie with Statistics is the most popular book on statistics ever published. It does not much consider hypothesis"}]}, {"question": "How do you predict regression", "positive_ctxs": [{"text": "The general procedure for using regression to make good predictions is the following:Research the subject-area so you can build on the work of others. Collect data for the relevant variables.Specify and assess your regression model.If you have a model that adequately fits the data, use it to make predictions."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Computational neuroscience aims to address a wide array of questions. How do axons and dendrites form during development? How do axons know where to target and how to reach these targets?"}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. 
However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "\"Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday \u2013 and thus be able to change, say, Saturday's model before Saturday arrives."}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}, {"text": "Given a query, view this as a mini document, and compare it to your documents in the low-dimensional space.To do the latter, you must first translate your query into the low-dimensional space. It is then intuitive that you must use the same transformation that you use on your documents:"}]}, {"question": "What is Q function explain Q learning with suitable example", "positive_ctxs": [{"text": "Q-Learning is a value-based reinforcement learning algorithm which is used to find the optimal action-selection policy using a Q function. Our goal is to maximize the value function Q. The Q table helps us to find the best action for each state. 
Initially we explore the environment and update the Q-Table."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Select a random subset Q of [n] containing m elements and a random permutation, and ask about the probability that all elements of Q lie on the same cycle. This is another average parameter. The function b(k) is equal to"}, {"text": "The form shows that inference from P implies Q to the negation of Q implies the negation of P is a valid argument."}, {"text": "Proof: The integers Z are countable because the function f : Z \u2192 N given by f(n) = 2n if n is non-negative and f(n) = 3\u2212 n if n is negative, is an injective function. The rational numbers Q are countable because the function g : Z \u00d7 N \u2192 Q given by g(m, n) = m/(n + 1) is a surjection from the countable set Z \u00d7 N to the rationals Q."}, {"text": "The DeepMind system used a deep convolutional neural network, with layers of tiled convolutional filters to mimic the effects of receptive fields. Reinforcement learning is unstable or divergent when a nonlinear function approximator such as a neural network is used to represent Q. This instability comes from the correlations present in the sequence of observations, the fact that small updates to Q may significantly change the policy and the data distribution, and the correlations between Q and the target values."}, {"text": "In statistics, in the analysis of two-way randomized block designs where the response variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q test is a non-parametric statistical test to verify whether k treatments have identical effects. It is named after William Gemmell Cochran. Cochran's Q test should not be confused with Cochran's C test, which is a variance outlier test."}, {"text": "where In is the identity matrix of size n, and 0n,n is the zero matrix of size n\u00d7n. 
Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q."}, {"text": "(S may be periodic, even if Q is not. Once \u03c0 is found, it must be normalized to a unit vector.)"}]}, {"question": "What is a class in decision tree learning", "positive_ctxs": [{"text": "A decision tree is a simple representation for classifying examples. For this section, assume that all of the input features have finite discrete domains, and there is a single target feature called the \"classification\". Each element of the domain of the classification is called a class."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "A decision tree or a classification tree is a tree in which each internal (non-leaf) node is labeled with an input feature. The arcs coming from a node labeled with an input feature are labeled with each of the possible values of the target feature or the arc leads to a subordinate decision node on a different input feature. 
Each leaf of the tree is labeled with a class or a probability distribution over the classes, signifying that the data set has been classified by the tree into either a specific class, or into a particular probability distribution (which, if the decision tree is well-constructed, is skewed towards certain subsets of classes)."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. 
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}, {"text": "Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels."}]}, {"question": "Is colour a qualitative variable", "positive_ctxs": [{"text": "A qualitative variable, also called a categorical variable, is a variable that isn't numerical. It describes data that fits into categories. For example: Eye colors (variables include: blue, green, brown, hazel)."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The data that all share a qualitative property form a nominal category. 
A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable."}, {"text": "Therefore, the HSL and HSV colour models are more often used; note that since hue is a circular quantity it requires circular thresholding. It is also possible to use the CMYK colour model (Pham et al., 2007)."}, {"text": "A model with a dummy dependent variable (also known as a qualitative dependent variable) is one in which the dependent variable, as influenced by the explanatory variables, is qualitative in nature. Some decisions regarding 'how much' of an act must be performed involve a prior decision making on whether to perform the act or not. For example, the amount of output to produce, the cost to be incurred, etc."}, {"text": "A model with a dummy dependent variable (also known as a qualitative dependent variable) is one in which the dependent variable, as influenced by the explanatory variables, is qualitative in nature. Some decisions regarding 'how much' of an act must be performed involve a prior decision making on whether to perform the act or not. For example, the amount of output to produce, the cost to be incurred, etc."}, {"text": "The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel treated. This indicates that qualitative properties are closely related to emotional impressions."}, {"text": "One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. Slight pink colour may appear on the cub's fur, as a result of a chemical reaction between the fur and its mother's saliva. 
A month after birth, the colour pattern of the cub's fur is fully developed."}, {"text": "In statistics, a categorical variable is a variable that can take on one of a limited, and usually fixed, number of possible values, assigning each individual or other unit of observation to a particular group or nominal category on the basis of some qualitative property. In computer science and some branches of mathematics, categorical variables are referred to as enumerations or enumerated types. Commonly (though not in this article), each of the possible values of a categorical variable is referred to as a level."}]}, {"question": "What is one vs all classification in machine learning", "positive_ctxs": [{"text": "One-vs-rest (OvR for short, also referred to as One-vs-All or OvA) is a heuristic method for using binary classification algorithms for multi-class classification. It involves splitting the multi-class dataset into multiple binary classification problems."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "Preference learning is a subfield in machine learning, which is a classification method based on observed preference information. In the view of supervised learning, preference learning trains on a set of items which have preferences toward labels or other items and predicts the preferences for all items."}, {"text": "What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. 
Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning."}, {"text": "This section discusses strategies for reducing the problem of multiclass classification to multiple binary classification problems. It can be categorized into one vs rest and one vs one. The techniques developed based on reducing the multi-class problem into multiple binary problems can also be called problem transformation techniques."}, {"text": "This section discusses strategies for reducing the problem of multiclass classification to multiple binary classification problems. It can be categorized into one vs rest and one vs one. The techniques developed based on reducing the multi-class problem into multiple binary problems can also be called problem transformation techniques."}]}, {"question": "What happens if two independent normal random variables are combined", "positive_ctxs": [{"text": "Any sum or difference or independent normal random variables is also normally distributed. A binomial setting arises when we perform several independent trials of the same chance process and record the number of times a particular outcome occurs."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "What happens when one number is zero, both numbers are zero? (\"Inelegant\" computes forever in all cases; \"Elegant\" computes forever when A = 0.) What happens if negative numbers are entered?"}, {"text": "In general, random variables may be uncorrelated but statistically dependent. 
But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent."}, {"text": "There are cases in which uncorrelatedness does imply independence. One of these cases is the one in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution). Further, two jointly normally distributed random variables are independent if they are uncorrelated, although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not joint normal (see Normally distributed and uncorrelated does not imply independent)."}, {"text": "is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next."}, {"text": "is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next."}, {"text": "is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next."}, {"text": "must be normal deviates.This result is known as Cram\u00e9r\u2019s decomposition theorem, and is equivalent to saying that the convolution of two distributions is normal if and only if both are normal. Cram\u00e9r's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely."}]}, {"question": "How does cluster sampling work", "positive_ctxs": [{"text": "Cluster sampling refers to a type of sampling method . 
With cluster sampling, the researcher divides the population into separate groups, called clusters. Then, a simple random sample of clusters is selected from the population. The researcher conducts his analysis on data from the sampled clusters."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled."}, {"text": "Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling. However, each sample may not be a full representative of the whole population."}, {"text": "Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling. 
However, each sample may not be a full representative of the whole population."}, {"text": "An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster."}, {"text": "An example of cluster sampling is area sampling or geographical cluster sampling. Each cluster is a geographical area. Because a geographically dispersed population can be expensive to survey, greater economy than simple random sampling can be achieved by grouping several respondents within a local area into a cluster."}, {"text": "The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a \"one-stage\" cluster sampling plan. If a simple random subsample of elements is selected within each of these groups, this is referred to as a \"two-stage\" cluster sampling plan."}]}, {"question": "Why does ingroup bias occur", "positive_ctxs": [{"text": "According to the realistic conflict theory, ingroup bias arises from competition for resources between groups. Since different groups are all competing for the same available resources, it serves the best interests of the group to favor members while spurning outsiders."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "For example, members of minority groups would be particularly likely to accentuate intragroup solidity through the emphasis of ingroup homogeneity. This is because minority group members, due to their minority status, are likely to experience threat to their self-esteem. 
This was empirically supported. Within the same tradition it was also hypothesised that an ingroup homogeneity effect would emerge on ingroup defining dimensions for both minority and majority group members."}, {"text": "When perceiving ingroup members a perceiver may experience either an intergroup context or an intragroup context. In an intergroup context the ingroup would also be predicted to be seen as comparatively homogeneous as the perceiver attends to the differences between \u201cus\u201d and \u201cthem\u201d (in other words, depersonalization occurs). However, in an intragroup context the perceiver may be motivated to attend to differences with the group (between \u201cme\u201d and \u201cothers in the group\u201d) leading to perceptions of comparative ingroup heterogeneity."}, {"text": "The self-categorization theory account is supported by evidence showing that in an intergroup context both the ingroup and outgroup will be perceived as more homogenous, while when judged in isolation the ingroup will be perceived as comparatively heterogeneous. The self-categorization theory account eliminates the need to posit differing processing mechanisms for ingroups and outgroups, as well as accounting for findings of outgroup homogeneity in the minimal group paradigm."}, {"text": "Another body of research looked at ingroup and outgroup homogeneity from the perspective of social identity theory. While complementary to the self-categorization theory account, this body of research was concerned more with specific homogeneity effects associated with the motivations of perceivers. They derived from social identity theory the prediction that comparative ingroup homogeneity will at times arise due to demands to establish a positive and distinct social identity."}, {"text": "Social psychologists have long made the distinction between ingroup favoritism and outgroup negativity, where outgroup negativity is the act of punishing or placing burdens upon the outgroup. 
Indeed, a significant body of research exists that attempts to identify the relationship between ingroup favoritism and outgroup negativity, as well as conditions that will lead to outgroup negativity. For example, Struch and Schwartz found support for the predictions of belief congruence theory."}, {"text": "It may occur due to several factors as outlined in Deming (1990).Non-response bias can be a problem in longitudinal research due to attrition during the study."}, {"text": "It may occur due to several factors as outlined in Deming (1990).Non-response bias can be a problem in longitudinal research due to attrition during the study."}]}, {"question": "How do you find the cosine similarity between two documents", "positive_ctxs": [{"text": "The cosine similarity is the cosine of the angle between two vectors. Figure 1 shows three 3-dimensional vectors and the angles between each pair. In text analysis, each vector can represent a document. The greater the value of \u03b8, the less the value of cos \u03b8, thus the less the similarity between two documents."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies (using tf\u2013idf weights) cannot be negative. The angle between two term frequency vectors cannot be greater than 90\u00b0."}, {"text": "In the case of information retrieval, the cosine similarity of two documents will range from 0 to 1, since the term frequencies (using tf\u2013idf weights) cannot be negative. The angle between two term frequency vectors cannot be greater than 90\u00b0."}, {"text": "CL-ESA exploits a document-aligned multilingual reference collection (e.g., again, Wikipedia) to represent a document as a language-independent concept vector. 
The relatedness of two documents in different languages is assessed by the cosine similarity between the corresponding vector representations."}, {"text": "A soft cosine or (\"soft\" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which help generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity."}, {"text": "A soft cosine or (\"soft\" similarity) between two vectors considers similarities between pairs of features. The traditional cosine similarity considers the vector space model (VSM) features as independent or completely different, while the soft cosine measure proposes considering the similarity of features in VSM, which help generalize the concept of cosine (and soft cosine) as well as the idea of (soft) similarity."}, {"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space. It is defined to equal the cosine of the angle between them, which is also the same as the inner product of the same vectors normalized to both have length 1. The cosine of 0\u00b0 is 1, and it is less than 1 for any angle in the interval (0, \u03c0] radians."}]}, {"question": "Are parameters random", "positive_ctxs": [{"text": "Another view however is that the parameter value used to generate the data that are obtained in your study is just one drawn parameter value, where the draw is from some distribution (the prior). as parameters, but rather as random or latent effects."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Are the state variables discrete or continuous? 
If they are discrete, do they have only a finite number of possible values?"}, {"text": "Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?"}, {"text": "Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems."}, {"text": "It is important to obtain some indication about how generalizable the results are. While this is often difficult to check, one can look at the stability of the results. Are the results reliable and reproducible?"}, {"text": "Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements."}, {"text": "It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density."}, {"text": "It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. 
A given set of parameters describes a single distribution within the family sharing the functional form of the density."}]}, {"question": "How can you tell if someone is highly intelligent", "positive_ctxs": [{"text": "So here are some signs you're highly intelligent, even if you don't feel like it.You're Empathetic And Compassionate. Andrew Zaeh for Bustle. You're Curious About The World. You're Observant. You Have Self-Control. You Have A Good Working Memory. You Like To Go With The Flow.More items\u2022"}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Formulating the problem \u2013 What variables do you want to compare? How many variables do you want to compare? What purpose is the study to be used for?"}, {"text": "Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources \u2014 not for their own sake, but to succeed in its assigned task.A system that is optimizing a function of n variables, where the objective depends on a subset of size k 0 and 0 < p < \u221e,"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "In set theory as Cantor defined and Zermelo and Fraenkel axiomatized, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh so an object has a degree of membership in a set, a number between 0 and 1. 
For example, the degree of membership of a person in the set of \"tall people\" is more flexible than a simple yes or no answer and can be a real number such as 0.75."}, {"text": "If the output is considered as undefined when a parameter is undefined, then pow(1, qNaN) should produce a qNaN. However, math libraries have typically returned 1 for pow(1, y) for any real number y, and even when y is an infinity. Similarly, they produce 1 for pow(x, 0) even when x is 0 or an infinity."}, {"text": "In mathematics, real is used as an adjective, meaning that the underlying field is the field of the real numbers (or the real field). For example, real matrix, real polynomial and real Lie algebra. The word is also used as a noun, meaning a real number (as in \"the set of all reals\")."}, {"text": "More generally, exponentiation allows any positive real number as base to be raised to any real power, always producing a positive result, so logb(x) for any two positive real numbers b and x, where b is not equal to 1, is always a unique real number y. More explicitly, the defining relation between exponentiation and logarithm is:"}]}, {"question": "Is XGBoost good for regression", "positive_ctxs": [{"text": "XGboost is the most widely used algorithm in machine learning, whether the problem is a classification or a regression problem. It is known for its good performance as compared to all other machine learning algorithms."}], "negative_ctxs": [], "hard_negative_ctxs": [{"text": "Is the yield of good cookies affected by the baking temperature and time in the oven? The table shows data for 8 batches of cookies."}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? 
With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?"}, {"text": "Look at the candidates to drop and the components to be dropped. Is there anything that needs to be retained because it is critical to one's construct ? For example, if a conceptually important item only cross loads on a component to be dropped, it is good to keep it for the next round."}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "For example, there are about 600 million tweets produced every day. Is it necessary to look at all of them to determine the topics that are discussed during the day? Is it necessary to look at all the tweets to determine the sentiment on each of the topics?"}, {"text": "Consequential \u2013 What are the potential risks if the scores are invalid or inappropriately interpreted? Is the test still worthwhile given the risks?"}]}]