Question: <p>This might be purely notational, but I'm confused about the probability measures at play when using Bayesian inference. It's sufficient to focus on the numerator here. Let's assume that I have a prior over hypotheses $P(H)$ and that these hypotheses are themselves distributions over some data. When some...
https://stats.stackexchange.com/questions/230252/probability-measures-at-play-in-bayesian-inference
Question: <p>How would one infer the number of people that took a test based on the percentages of people that answered particular questions correctly?</p> <p>For example <code> 1. 85% 2. 25% 3. 95% 4. 15% 5. 35% </code> $ n = 20 $</p> <p>A caveat is that these percentages actually come with some noise, therefore you can...
https://stats.stackexchange.com/questions/257998/inferring-sample-size-from-proportions
Question: <p>Suppose:</p> <ul> <li>$N \sim {\rm Poisson}(\lambda)$</li> <li>$\lambda$ is unknown, but we believe that it can be assumed $\sim \exp(1)$</li> </ul> <p>If I want to calculate $N | X$, i.e., $P(model | data)$, I need to use the Bayes theorem in the following way:</p> <p>$P(model|data) \propto P(data|mode...
https://stats.stackexchange.com/questions/26199/how-do-i-calculate-a-posterior-distribution-for-a-poisson-model-with-exponential
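The conjugate structure behind this question can be sketched directly: an Exp(1) prior is Gamma(1, 1), so a single Poisson observation n gives a Gamma(n + 1, 2) posterior. A minimal stdlib check (the observed count n = 5 is a made-up example):

```python
import math

# Poisson(lambda) likelihood with one observation n_obs and an Exp(1)
# prior on lambda (i.e. Gamma(shape=1, rate=1)).  Conjugacy gives the
# posterior Gamma(shape = n_obs + 1, rate = 1 + 1).
n_obs = 5                      # hypothetical observed count
shape, rate = n_obs + 1, 2.0
posterior_mean = shape / rate  # Gamma mean = shape / rate

# Cross-check the conjugate result by numerically integrating the
# unnormalised posterior lambda^n * exp(-2*lambda) on a grid.
step = 0.001
grid = [i * step for i in range(1, 40000)]
unnorm = [lam**n_obs * math.exp(-2.0 * lam) for lam in grid]
Z = sum(unnorm) * step
num_mean = sum(lam * u for lam, u in zip(grid, unnorm)) * step / Z

print(posterior_mean, round(num_mean, 3))
```

The numerical mean matches the closed-form (n + 1)/2, which is the point of the conjugate update.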
Question: <p>When specifying a Bayesian model, one can specify weakly-informative priors. However, such priors may represent a concern to many researchers. After all, if they are weakly-informed, one may be concerned that the imprecision of such a prior may be biasing the results. It is also my understanding that the i...
https://stats.stackexchange.com/questions/621780/should-bayesian-inference-be-avoided-with-a-small-sample-size-and-weakly-informa
Question: <p>Suppose we alter side 6 of a die to appear more than 1/6th of the time. We do not know the actual proportion of the time each side of the die will appear, because some or all of the other 5 sides may not have the 1/6 proportion either. What do we need to be 99% confident of each of the 6 sides of this altered ...
https://stats.stackexchange.com/questions/71834/bayesian-approach-on-games-of-chance-with-physical-devices
Question: <p>Say I have $n$ possible events that lead to $m$ observable effects.</p> <p>The Bayesian inference hypothesis is that events are mutually exclusive and jointly exhaustive. Could I still use Bayesian inference (possibly modified) when more than one event occurs at some observation?</p> <p>Rea...
https://stats.stackexchange.com/questions/76229/bayesian-inference-adaptable-to-not-mutually-exlclusive-events
Question: <p>I have a problem with the following setup. I've been reading <a href="http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/" rel="nofollow">"Doing Bayesian Data Analysis: A Tutorial with R and BUGS"</a> and it seems like the Bayesian approach is a good one, but I'm not entirely sure how to model it....
https://stats.stackexchange.com/questions/95081/finding-optimal-parameter-values-using-a-bayesian-model
Question: <p>I designed a Bayesian model and sampled the posterior using an MCMC algorithm. My problem is that the posterior marginal distribution of a given latent intermediate variable appears to be uniform, just like the prior I assigned to it. In practice this variable is supposed to have a substantial importance on t...
https://stats.stackexchange.com/questions/49496/marginal-posterior-and-prior-are-similar-and-flat
Question: <p>In a population of <span class="math-container">$N$</span>, <span class="math-container">$K$</span> experts pick <span class="math-container">$M_{k\in\{1, ..., K\}}$</span> individuals that will have a certain attribute. Note that <span class="math-container">$M$</span> can be different across experts (e.g...
https://stats.stackexchange.com/questions/535101/bayesian-combination-of-expert-opinion
Question: <p>Suppose I have a questionnaire and I ask respondents how often they eat at McDonalds:</p> <ol> <li>Never</li> <li>Less than once a month</li> <li>At least once a month but less than once a week</li> <li>1-3 times a week</li> <li>More than 3 times a week</li> </ol> <p>I then correlate these answers with w...
https://stats.stackexchange.com/questions/13352/how-do-you-deal-with-a-multiple-choice-observation-in-bayesian-inference-when
Question: <p>Can anyone point me to a reference for calculating Bayesian interval estimates for multinomial probabilities? I am familiar with conventional methods (i.e.: Quesenberry and Hurst (1964), Goodman (1965), Bailey (1980), Fitzpatrick and Scott (1987), and Glaz and Sison (1999)). I have found methods to calcu...
https://stats.stackexchange.com/questions/339357/bayesian-interval-estimates-for-multinomial-probabilities
Question: <p>I am trying to solve the following question with my very rusty stats expertise:</p> <p>I have a data set of people of which some do exercise with different frequencies per month and other don’t exercise at all (base rates of exercise). My data contains all the dates when each person did exercise. At some...
https://stats.stackexchange.com/questions/440155/effect-of-event-on-average-probabilities-given-different-base-rates
Question: <p>I am reading <em>Principles of Statistics</em> (MG Bulmer, 1965) and stumbled upon the problem that Bayes considered when developing his theorem. Bulmer makes use of <span class="math-container">$dP$</span> and I have no idea what it means. In his words:</p> <blockquote> <p>The problem that Bayes himself c...
https://stats.stackexchange.com/questions/519436/problem-faced-by-bayes-when-developing-his-method-for-bayesian-inference
Question: <p>A reformulation of a question that came up in a model:</p> <p>Imagine a toy store that sells <span class="math-container">$K$</span> toys, where our prior is that each toy has equal probability <span class="math-container">$1/K$</span> of being purchased by a customer. Then you have a customer come in and ...
https://stats.stackexchange.com/questions/544034/bayesian-updating-of-a-constant-probability-using-one-data-point
Question: <p>I've recently encountered the "posterior predictive distribution" <span class="math-container">$$p(\bar{x}|X)=E_\theta[p(\bar{x}|\theta)]=\int_\theta p(\bar{x}|\theta)\hspace{0.5mm}p(\theta|X)d\theta$$</span> where <span class="math-container">$\bar{x}$</span> is a new point, <span class="math-container">$...
https://stats.stackexchange.com/questions/438218/intuition-behind-posterior-predictive-distribution
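A concrete instance may help the intuition behind this question: with a Beta(a, b) posterior over a coin's heads probability, the posterior predictive for "next flip is heads" is just the posterior mean a/(a + b), because the integral averages p(x̄|θ) over the posterior. A stdlib check (the counts a = 5, b = 3 are hypothetical):

```python
import math

# Beta(a, b) posterior over the heads probability p of a coin.  The
# posterior predictive for one more flip averages p over the posterior,
# which for the Beta gives the closed form a / (a + b).
a, b = 5, 3
closed_form = a / (a + b)

# The same quantity via the integral  E[p | data] = ∫ p · Beta(p; a, b) dp,
# approximated with a midpoint rule.
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # Beta function B(a, b)
N = 100_000
total = 0.0
for i in range(N):
    p = (i + 0.5) / N
    total += p * p ** (a - 1) * (1 - p) ** (b - 1) / B
integral = total / N

print(closed_form, round(integral, 4))
```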
Question: <p>It seems that most Bayesian inference focuses on inferring the posterior. Is it possible to infer both the prior and the posterior?</p> Answer: <p>Your question is ill-posed: it doesn't make sense to "infer" a prior.</p> <p>Let's say you have a likelihood $p(x|\theta)$, where $x$ is the data and $\theta...
https://stats.stackexchange.com/questions/313768/is-it-possible-to-infer-both-prior-and-posterior-simultaneously
Question: <p>Let $X_1$, $X_2$, ..., $X_n$ be iid RV's with range $[0,1]$ but unknown distribution. (I'm OK with assuming that the distribution is continuous, etc., if necessary.)</p> <p>Define $S_n = X_1 + \cdots + X_n$.</p> <p>I am given $S_k$, and ask: What can I infer, in a Bayesian manner, about $S_n$?</p> <p>T...
https://stats.stackexchange.com/questions/24344/bayesian-inference-on-a-sum-of-iid-real-valued-random-variables
Question: <p>In frequentist, i.e., sampling-based statistics, we envision a target population to which inference is made. Notwithstanding the fact that our so-called random samples from this population are usually more convenience-based samples, we try to infer from a sample to the population. For example in a random...
https://stats.stackexchange.com/questions/250793/bayesian-inferential-target
Question: <p>In maximum likelihood theory it is common to summarise parameter estimates by their maximum likelihood estimate $\theta_{\mathrm{MLE}}$ and the corresponding standard error $\sigma_{\mathrm{MLE}}$ or coefficient of variation $$CV = \frac{\sigma_{\mathrm{MLE}}}{\theta_{\mathrm{MLE}}}.$$ This works since we ...
https://stats.stackexchange.com/questions/333749/bayesian-counterpart-to-parameter-estimate-precision
Question: <p>One of the assumptions in a model is the conditional dependence between random variables in the joint prior distribution. Consider the following model, <span class="math-container">$$p(a,b|X) \propto p(X|a,b)p(a,b)$$</span></p> <p>Now suppose an independence assumption for the prior <span class="math-cont...
https://stats.stackexchange.com/questions/414045/does-the-posterior-necessarily-follow-the-same-conditional-dependence-structure
Question: <p>Let <span class="math-container">$X$</span> be the number of hits in <span class="math-container">$N$</span> tries. I know that the probability of the next hit is <span class="math-container">$P(\text{Hit}) = X/N$</span>.</p> <p>How can I get the generic expression for the probability distribution function ...
https://stats.stackexchange.com/questions/592969/probability-of-hitting-x-shots-in-n-tries-knowing-that-the-phit-is-the-ratio-o
Question: <p>Say we have two coins with unknown success probabilities <span class="math-container">$p_1$</span> and <span class="math-container">$p_2$</span>. To know more about the probabilities, say that we use Bayesian approach.</p> <p>To do so, we first set our prior: <span class="math-container">$P_1\sim Beta(1,1)...
https://stats.stackexchange.com/questions/541233/bayesian-inference-from-extra-information-beta-binomial-case
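The first step this question describes (Beta(1, 1) priors, then conditioning on flip counts) is the standard beta-binomial update; a minimal sketch with made-up flip counts:

```python
# Beta-binomial updating for two coins with independent Beta(1, 1)
# (i.e. uniform) priors on their success probabilities p1 and p2.
def beta_update(a, b, heads, tails):
    # Beta(a, b) prior + binomial data  ->  Beta(a + heads, b + tails)
    return a + heads, b + tails

# Hypothetical data: coin 1 shows 7 heads in 10 flips, coin 2 shows 3.
a1, b1 = beta_update(1, 1, 7, 3)
a2, b2 = beta_update(1, 1, 3, 7)

mean1 = a1 / (a1 + b1)   # posterior mean of p1
mean2 = a2 / (a2 + b2)   # posterior mean of p2
print(round(mean1, 3), round(mean2, 3))
```

Any extra information beyond the flip counts (the point of the question) no longer fits this closed form and has to enter through the likelihood.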
Question: <p>The primary objective of Bayesian inference is to compute the posterior.</p> <p>For instance, if the posterior <span class="math-container">$p(\theta | x)$</span> is known then the expectation <span class="math-container">$\mathbb{E}$</span> of the test function <span class="math-container">$\tau(x)$</span...
https://stats.stackexchange.com/questions/608478/why-does-posterior-prediction-involve-integration-over-all-parameter-space
Question: <p>Let us have 2 groups - a treatment group (1) and a control group (2). The groups have survival probabilities $p_1$ and $p_2$ respectively. Of course, each patient survives or dies independently given the $p_i$ of his group. Let us define $y_i$ as the number of survivors in group $i$ and $n_i - y_i$ as the number of deceased in group $i$.</...
https://stats.stackexchange.com/questions/305815/distribution-of-oddsratio-after-bayesian-inference-under-binomial-model
Question: <p>Assume you have a buyer and a seller.</p> <p>You know the buyer's probability of buying the good at different prices (i.e. something like P(B|price)), and similarly the seller's P(S|price).</p> <p>Given you know these, you know a transaction happens if both agree - in that case, for any given price, P(transacti...
https://stats.stackexchange.com/questions/635804/is-there-an-implied-distribution-given-the-below-for-where-a-transaction-happens
Question: <p>Let's say you have a process that generates data according to r = sin(t) + epsilon, where epsilon ~ N(0,V) is Gaussian noise. The unconditional variance of r is 0.5 + V. </p> <p>Let's say we're forecasting r with a model m, and that our forecast is "perfect" in that m = sin(t). Construct v = r - m, which ...
https://stats.stackexchange.com/questions/5453/why-does-continuous-bayesian-analysis-seem-to-give-this-contradictory-result
Question: <p>I'm reading this <a href="https://statswithr.github.io/book/the-basics-of-bayesian-statistics.html" rel="nofollow noreferrer">online book</a> and there is something unclear to me in <a href="https://statswithr.github.io/book/the-basics-of-bayesian-statistics.html#tab:RU-486prior" rel="nofollow noreferrer">...
https://stats.stackexchange.com/questions/445096/understanding-posterior-probability-bayesian-inference
Question: <p>Can the likelihood be defined as the probability of the rate parameter given a range of data? Or as the probability of the data, given a range of rate parameters?</p> Answer: <p>I think I understand your confusion. Typically, Bayes' rule is written as:</p> <p><span class="math-container">$$p(\theta |y) ...
https://stats.stackexchange.com/questions/444781/definition-of-likelihood-in-bayesian-statistics
Question: <p>I've been trying to get into chapter 4 of Lehmann's <em>Theory of Point Estimation</em>, but I can't seem to understand his presentation of the Bayesian setup. He starts off with the introduction below, and after a few examples of uses of Bayesian estimators he outlines the idea (after the dots in my photo). I...
https://stats.stackexchange.com/questions/212866/on-the-bayesian-setup-in-inference
Question: <p>This is what I saw in a source I am referring to:</p> <p><a href="https://i.sstatic.net/uAubu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uAubu.png" alt="enter image description here" /></a></p> <p>Since both the numerator and the denominator are probabilities (so they can only take any ...
https://stats.stackexchange.com/questions/571438/can-the-bayes-factor-be-negative
Question: <p>Given prior <span class="math-container">$ \mu \sim \mathcal{N}(\mu_0, \tau^2) $</span>, likelihood <span class="math-container">$ X_i | \mu \sim \mathcal{N}(\mu, \sigma^2) $</span>, we know the closed-form solution of posterior is <span class="math-container">$ \mu | X_1, X_2, \ldots, X_n \sim \mathcal{N}...
https://stats.stackexchange.com/questions/626327/the-role-of-variance-of-the-distribution-plays-in-bayesian-inference
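The closed form this question refers to makes the role of the two variances explicit: the posterior precision is the sum of the prior and data precisions, and the posterior mean is the precision-weighted average of prior mean and sample mean. A small sketch (the data and variance values are made up):

```python
# Conjugate normal-normal update: prior mu ~ N(mu0, tau2), likelihood
# X_i | mu ~ N(mu, sigma2).  Precisions add; the posterior mean is a
# precision-weighted average of the prior mean and the sample mean.
def normal_posterior(mu0, tau2, sigma2, xs):
    n = len(xs)
    precision = 1.0 / tau2 + n / sigma2
    mean = (mu0 / tau2 + sum(xs) / sigma2) / precision
    return mean, 1.0 / precision

xs = [1.0, 2.0, 3.0]                               # hypothetical data, mean 2.0
m_tight, _ = normal_posterior(0.0, 0.01, 1.0, xs)  # tiny tau2: prior dominates
m_flat, _ = normal_posterior(0.0, 1e6, 1.0, xs)    # huge tau2: data dominate
print(round(m_tight, 3), round(m_flat, 3))
```

Shrinking tau2 pulls the posterior mean toward mu0 = 0; inflating it recovers the sample mean.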
Question: <p>I have a problem understanding why Bayesian Inference leads to intractable problems. The problem is often explained like this:</p> <p><a href="https://i.sstatic.net/fIwYu.png" rel="noreferrer"><img src="https://i.sstatic.net/fIwYu.png" alt="enter image description here"></a></p> <p>What I don't understan...
https://stats.stackexchange.com/questions/208176/why-is-the-posterior-distribution-in-bayesian-inference-often-intractable
Question: <p>I have often heard that in certain instances, it can be more beneficial to use Bayesian based methods because they provide &quot;a distribution of possible answers&quot; (i.e. the posterior distribution) instead of a single answer (as done in the frequentist case). However, it seems that at the end of the ...
https://stats.stackexchange.com/questions/547923/at-the-end-of-the-day-what-do-you-do-with-bayesian-estimates
Question: <p>I'm currently struggling with a question involving probability and statistics.</p> <p>I have this dataset of sales, and I was trying to make a probability of sales based on that dataset and the data that it provides me of weeks, months and years back. I started using Bayes Theorem to do it, and after a con...
https://stats.stackexchange.com/questions/477324/question-about-probability-vs-inferential-statistics
Question: <p>I can't figure out how to compute the variance of an estimator which is the mean of the posterior distribution, say Gamma(<span class="math-container">$\sum x_i+3, n+a$</span>). How do I find the variance of this mean?</p> Answer: <p>In the Bayesian paradigm, distributions of interest are uncertain...
https://stats.stackexchange.com/questions/481595/how-to-compute-the-variance-for-a-bayesian-estimator
Question: <p>In Bayesian statistics, it is often mentioned that the posterior distribution is intractable and thus approximate inference must be applied. What are the factors that cause this intractability? </p> Answer: <p>I had the opportunity to ask <a href="https://scholar.google.com/citations?user=8OYE6iEAAAAJ&amp...
https://stats.stackexchange.com/questions/4417/what-are-the-factors-that-cause-the-posterior-distributions-to-be-intractable
Question: <p><a href="https://i.sstatic.net/ZbJUs.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZbJUs.jpg" alt="enter image description here"></a><a href="https://i.sstatic.net/k8NUI.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k8NUI.jpg" alt="enter image description here"></a>im havi...
https://stats.stackexchange.com/questions/442512/we-flip-a-coin-20-times-and-observe-12-heads-what-is-the-probability-that-the-c
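For this specific setup, a uniform Beta(1, 1) prior with 12 heads and 8 tails yields a Beta(13, 9) posterior, and "the coin is biased towards heads" can be read as P(p &gt; 1/2 | data). A grid approximation using only the stdlib:

```python
# 12 heads in 20 flips with a uniform Beta(1, 1) prior on the heads
# probability p gives the posterior Beta(13, 9).  Approximate
# P(p > 1/2 | data) by normalising the density values on a fine grid.
a, b = 1 + 12, 1 + 8
N = 100_000
dens = [((i / N) ** (a - 1)) * ((1 - i / N) ** (b - 1)) for i in range(1, N)]
Z = sum(dens)
p_biased = sum(d for i, d in enumerate(dens, start=1) if i / N > 0.5) / Z
print(round(p_biased, 3))
```

The result, about 0.81, is the posterior probability the question is after (under the uniform-prior assumption).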
Question: <p>My problem is thus: given set of time series data <span class="math-container">$D = \{t_m, x_m\}$</span> where <span class="math-container">$m$</span> is a label, <span class="math-container">$m=1,2,3,...,n$</span> I have a model <span class="math-container">$f(t,\mathbf{w})$</span> which generates an equi...
https://stats.stackexchange.com/questions/660688/parameter-covariance-in-bayesian-regression-of-time-series
Question: <p>So, I have a project to test the hypothesis that a marketing campaign with a new art generates more purchases than the old one, I have 2 samples of data, one using the standard ad and one using the new ad. So we have the total number of impressions and the number of purchases. We can estimate this as a <sp...
https://stats.stackexchange.com/questions/653748/can-you-use-the-beta-binomial-distribution-instead-of-mcmc
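This is exactly what conjugacy buys: with Beta(1, 1) priors, each ad's conversion rate has a closed-form Beta posterior, so no MCMC is needed. A stdlib sketch computing P(p_new &gt; p_old) by Monte Carlo over the two posteriors (the impression and purchase counts are hypothetical):

```python
import random

# With Beta(1, 1) priors, each ad's conversion rate has a closed-form
# Beta posterior: draw from both posteriors and count how often the
# new ad's rate beats the old one.
def sample_beta(rng, a, b):
    # Beta(a, b) via two gamma draws (stdlib only).
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

# Hypothetical campaign data: (impressions, purchases) per ad.
old_n, old_k = 1000, 30
new_n, new_k = 1000, 45

rng = random.Random(42)
draws = 20_000
wins = sum(
    sample_beta(rng, 1 + new_k, 1 + new_n - new_k)
    > sample_beta(rng, 1 + old_k, 1 + old_n - old_k)
    for _ in range(draws)
)
p_new_better = wins / draws
print(round(p_new_better, 3))
```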
Question: <p>I am trying to sample from a posterior distribution using a MCMC algorithm using the Metropolis-Hastings sampler.</p> <p>How should I deal with the situations where I'm stuck in regions of the posterior with zero probability?</p> <p>These regions are present because the posterior distribution is truncate...
https://stats.stackexchange.com/questions/74330/how-to-find-the-support-of-the-posterior-distribution-to-apply-metropolis-hastin
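One standard way to handle this is to work on the log scale and return -inf outside the support: any proposal landing there gets acceptance probability zero, so the chain never leaves the truncation region, provided it starts inside it. A minimal random-walk Metropolis sketch with a toy half-normal target (not the asker's model):

```python
import math
import random

def log_post(x):
    # Toy truncated target: standard normal restricted to x > 0.
    # Returning -inf outside the support makes the acceptance
    # probability zero there, so such proposals are always rejected.
    if x <= 0:
        return -math.inf
    return -0.5 * x * x

rng = random.Random(1)
x = 1.0                        # start INSIDE the support
samples = []
for _ in range(50_000):
    prop = x + rng.gauss(0.0, 1.0)        # symmetric random-walk proposal
    log_ratio = log_post(prop) - log_post(x)
    if rng.random() < math.exp(min(0.0, log_ratio)):
        x = prop
    samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))   # the half-normal mean is sqrt(2/pi) ~ 0.80
```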
Question: <p>Can the approximating distributions for various factors in Expectation propagation be different distributions but still from the exponential family. For example, I have the following posterior form:</p> <p>$$ p(w, \lambda, \phi) = p(\phi)p(\lambda)p(w|\lambda) \prod_{i}p(y_{i}|w, \phi, \lambda) $$</p> <p...
https://stats.stackexchange.com/questions/82348/approximating-distributions-in-expectation-propagation
Question: <p>In complicated Bayesian models, like for instance a hierarchical nonparameteric one, often times it's intractable to do Gibbs or other MCMC sampling methods to convergence. Rather, people tend to do variational inference and use expectation maximization to find the approximate MAP parameters.</p> <p>Is th...
https://stats.stackexchange.com/questions/82946/global-search-operators-for-approximate-map-inference
Question: <p>I'm using the Expectation Propagation algorithm (<code>infer.net</code> library) for my feature selection problem.</p> <p>I generate input data and test my model. The thing is that when I use different numbers of data points, I get very different results.</p> <p>For example, in my current setting it really ...
https://stats.stackexchange.com/questions/121757/expectation-propagation-for-feature-selection
Question: <p>I have implemented a Bayesian inference procedure from scratch for a specific problem, but it doesn't seem to work.</p> <p>Since I can't just post the code online and ask the community to debug it, I was wondering if someone could provide a broade...
https://stats.stackexchange.com/questions/61580/points-to-keep-in-mind-while-implementing-a-nonparametric-bayesian-inference-pro
Question: <p>Suppose you are measuring $n$ quantities with error. Let $\beta_1,\ldots, \beta_n$ represent the true values and $X_1, \ldots, X_n$ represent the measured values of those quantities. Assume that the errors are centered normal. Let $\sigma_i^2\,, i=1, \ldots, n$ represent the <strong>known</strong> standar...
https://stats.stackexchange.com/questions/232037/bayesian-inference-posterior-in-a-simple-model
Question: <p>I know that my prior distribution is Beta(3,3) and that after tossing 12 coins, the number of 'heads' is less than 4 but I don't know the exact number. How do I calculate the posterior density?</p> <p>What I've tried to do is:</p> <p>If <span class="math-container">$X=\#$</span> of heads in <span class="ma...
https://stats.stackexchange.com/questions/553926/coin-tossing-posterior-density-calculation
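The censored observation "fewer than 4 heads" enters the likelihood as P(X ≤ 3 | p) = Σ_{x=0}^{3} C(12, x) p^x (1-p)^{12-x}, so the posterior is proportional to the Beta(3, 3) density times that sum (a mixture of Betas). A stdlib grid computation of the resulting posterior mean:

```python
import math

def unnorm_posterior(p):
    # Beta(3, 3) prior density (up to a constant) times the censored
    # likelihood P(X <= 3 | p) for X ~ Binomial(12, p).
    prior = p**2 * (1 - p) ** 2
    lik = sum(math.comb(12, x) * p**x * (1 - p) ** (12 - x) for x in range(4))
    return prior * lik

N = 20_000
grid = [(i + 0.5) / N for i in range(N)]
w = [unnorm_posterior(p) for p in grid]
Z = sum(w)
post_mean = sum(p * wi for p, wi in zip(grid, w)) / Z
print(round(post_mean, 3))
```

Equivalently, the posterior is a weighted mixture of Beta(3 + x, 15 - x) for x = 0, ..., 3, and the grid answer matches that mixture's mean.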
Question: <p>Suppose we have a hypothesis test: <span class="math-container">$$H_0: \theta≥\theta_0 ~~~ vs~~~ H_1:\theta&lt;\theta_0$$</span></p> <p>With the observation <span class="math-container">$X$</span>, the p-value is calculated by <span class="math-container">$p = P(X|H_0)$</span>. <em>Which means the sum of...
https://stats.stackexchange.com/questions/558394/can-i-interpret-the-p-value-of-a-statistic-test-as-a-part-in-bayesian-formula
Question: <p>My question is, is it possible to perform a Global Sensitivity Analysis on a Bayesian inference model (not just on the prior, the entire model)?</p> <p>A bit of context: I am fairly new to Bayesian statistics. Being a research student in astrobiology, I have read a few papers using Bayesian approaches to t...
https://stats.stackexchange.com/questions/559157/can-a-global-sensitivity-analysis-be-performed-on-bayesian-inference
Question: <p>Hi I am trying to estimate the posteriors of four calibration parameters namely $c_1, c_2, c_3$ and $c_4$ in the following equation using Bayesian inference</p> <p>$$ F=c_1 \cdot (i^{c_2}) \cdot(s^{c_3}) \cdot (1-\exp(c_4 t)) $$</p> <p>I have the observed data for the output $F$ and inputs $i$,$s$ and $t...
https://stats.stackexchange.com/questions/296682/using-indirect-prior-information-in-bayesian-inference
Question: <p>I have 2 models $M_1$ and $M_2$, which both have a gamma distribution and the same priors:</p> <p>$H_0 : \quad x_i \sim M_1 \\ H_a: \quad x_i \sim M_2$</p> <p>Both $M_1$ and $M_2$ have prior $\sim Ga(7,3000)$, but my posteriors are</p> <p>$M_1 \sim Ga(191,116665.4) \\ M_2 \sim Ga(192, 116188.9)$</p> <p>I ...
https://stats.stackexchange.com/questions/336127/calculating-bayes-factor-for-2-gamma-distributions
Question: <p>I'm studying for a past exam and I'm stumped on what a particular question is asking me. I've thought about it for days and I just don't know what they are asking. Can anyone interpret the question?</p> <p>It's part (ii):</p> <p>"The Hobbits living in the Shire are not known for being ver...
https://stats.stackexchange.com/questions/341115/can-you-interpret-this-question
Question: <p>I have the following data:</p> <p>31.0, 30.5, 20.6, 27.2, 26.5, 28.1, 25.8, 29.6, 30.0, 25.8, 25.1, 27.9, 23.0, 29.4, 28.7, 25.0, 31.1, 24.8, 24.8, 27.0, 22.3, 29.5, 31.5, 26.2, 24.6, 23.2, 25.7, 24.2, 28.8, 27.4, 29.6, 23.5, 26.4, 28.7, 25.5, 18.6, 25.2, 24.5, 27.9, 33.0, 21.4, 34.4, 27.2, 23.3, 29.3, 31...
https://stats.stackexchange.com/questions/346378/estimate-the-mean-and-variance-95-hpd-credible-region-using-bayesian-inference
Question: <p>I have used Bayesian reasoning in my research work and it has been extremely useful. The book I have read is E. T. Jaynes's <em>Probability Theory</em>. The idea is to formulate propositions, and then probability theory tells you how to assign numbers (viz. probabilities) to those propositions, conditional on one's...
https://stats.stackexchange.com/questions/367227/bayesian-updating-for-coin-toss
Question: <h2>Background</h2> <p>Suppose we have a model such that <span class="math-container">$Y \sim \mathcal{M}(\theta)$</span> is a discrete random variable taking values in <span class="math-container">$\{0, 1, \ldots\}$</span>. We would like to make inference about <span class="math-container">$\theta$</span> fro...
https://stats.stackexchange.com/questions/374316/truncated-count-model-including-information-about-the-number-of-unobserved-re
Question: <p>I have a random sample <span class="math-container">$X_1, X_2, ..., X_n$</span> with <span class="math-container">$X_i$</span> having a pdf</p> <p><span class="math-container">$$ f(x;\theta) = 2\theta^2x^{-2} $$</span></p> <p>I'd like to find the MLE of <span class="math-container">$\theta$</span>.</p> ...
https://stats.stackexchange.com/questions/387298/trouble-with-mle
Question: <h3>Question</h3> <p>I don't understand how when integrating over the parameters in the posterior predictive, the integration "disappears". It's hard for me to ask simply because I am confused, so here is an example.</p> <h3>Example</h3> <p>Imagine we have a Gaussian model with unknown mean <span class="ma...
https://stats.stackexchange.com/questions/402552/posterior-predictive-what-happens-to-integral-over-parameters
Question: <p>Suppose that we want to make the best Bayesian inference about some value <span class="math-container">$\mu$</span> for which we have a normal prior, i.e. <span class="math-container">$\mu\sim N(\mu_0, \sigma_0^2)$</span> with known parameters. To do so, we can choose parameters <span class="math-contain...
https://stats.stackexchange.com/questions/417198/best-sampling-method-within-the-normal-family
Question: <p>Suppose I have <span class="math-container">$K$</span> classes with distribution <span class="math-container">$\theta$</span> over <span class="math-container">$\{1,...,K\}$</span> and an underlying domain <span class="math-container">$D$</span> on which each class defines a categorical distribution <span ...
https://stats.stackexchange.com/questions/448987/posterior-probability-of-hypothesis-distributions
Question: <p>Given a biased die with d faces, you are given the results of n die rolls. I need to calculate confidence intervals for the probabilities of each of the d outcomes of the die.</p> <p>A solution in R would be even better.</p> Answer:
https://stats.stackexchange.com/questions/455919/confidence-intervals-for-probabilities-of-a-biased-die
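With a Dirichlet(1, ..., 1) prior, the posterior over the face probabilities after the rolls is Dirichlet(1 + c₁, ..., 1 + c_d), and credible intervals follow from posterior draws. A sketch in stdlib Python with made-up counts (the question asked for R, but the translation is mechanical):

```python
import random

def sample_dirichlet(rng, alphas):
    # Dirichlet draw via normalised gamma draws (stdlib only).
    gs = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(gs)
    return [g / s for g in gs]

counts = [8, 9, 11, 10, 7, 15]    # hypothetical results of 60 rolls, 6 faces
alphas = [1 + c for c in counts]  # Dirichlet(1, ..., 1) prior

rng = random.Random(7)
draws = [sample_dirichlet(rng, alphas) for _ in range(10_000)]

# Equal-tailed 95% credible interval for each face probability.
intervals = []
for face in range(6):
    xs = sorted(d[face] for d in draws)
    intervals.append((round(xs[249], 3), round(xs[9749], 3)))
print(intervals)
```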
Question: <p>Suppose I have a set of strings <span class="math-container">$S$</span> and I want to find out whether these strings have a certain type. To be more specific, I want to find out whether these strings are surnames. Suppose I have a large list <span class="math-container">$L$</span> of surnames from many reg...
https://stats.stackexchange.com/questions/511886/applying-bayesian-reasoning-to-estimate-the-type-of-a-feature
Question: <p>In a Bayesian hypothesis test between two alternatives A and B, what is the probability of making a type I and type II error?</p> <p>This question has been asked many times on this forum in various formats: Is Bayesian hypothesis testing immune to peaking? What is the optimal stopping point? If the Bayes ...
https://stats.stackexchange.com/questions/387974/bayesian-hypothesis-test-type-i-and-ii-errors
Question: <p>In Bayesian inference, it is said that <em><strong>for large samples, the posterior density is dominated by the likelihood. Furthermore, in the region where the likelihood is large, the posterior density is nearly constant.</strong></em> Could you kindly explain the logic behind such a statement? I woul...
https://stats.stackexchange.com/questions/519250/in-bayesian-inference-it-is-said-that-for-large-samples-the-posterior-density
Question: <p>Imagine I have an item I want to sell to a person. I know for sure that the person is not willing to pay more than \$X for this item, but I don't know which value between \$0 and \$X they <em>are</em> willing to pay for it, and I'm equally uncertain about all of them. So if I call how much they're willing ...
https://stats.stackexchange.com/questions/434205/how-to-optimise-waterfall-questions-of-purchase-value
Question: <p>Suppose we have two datasets <em>df_1</em> with variables {A,B,C} and <em>df_2</em> with variables {A,C,D} (A &amp; C are the only variables shared between the two datasets). Our aim is to predict A using B &amp; C or C &amp; D (depending on which pair is given). The simplest approach is to model A using <em>df_...
https://stats.stackexchange.com/questions/541925/fitting-a-single-model-to-different-datasets-that-include-different-variables
Question: <p>I have a very basic knowledge in statistics, so I am struggling a bit with the ideas of Bayesian inference.</p> <p>My data model looks like this,</p> <p><span class="math-container">$$ z(t) = \sum_{n = 1}^{N} e^{j 4\pi/\lambda \sqrt{(x_{n, t -1} + u_n.dt)^2 + (y_{n, t -1} + v_n.dt)^2}} + \mathcal{N}(0, \si...
https://stats.stackexchange.com/questions/563991/how-to-find-the-likelihood-probability-of-an-exponential-data-model
Question: <p>When we are uncertain about the probability of head, <span class="math-container">$p_H$</span>, in a coin tossing, we often model it using a Beta prior as follows: <span class="math-container">$$p_H\sim \text{Beta}(a_0,b_0),$$</span> for some parameters <span class="math-container">$a_0,b_0$</span>. </p> ...
https://stats.stackexchange.com/questions/432710/understanding-convergence-in-bayesian-inference-of-coin-tossing
Question: <p>I am developing a Bayesian system in which I would like to quantify the evidence for or against the conclusion that one data-generating process (X, for which we observe X = x) will produce a more extreme result than another process (Y, for which we observe Y = y).</p> <p>For my purposes, by &quot;more extr...
https://stats.stackexchange.com/questions/552919/bayesian-inference-of-prx-y-where-x-and-y-each-have-an-approximate-posterior
Question: <p>I am working with a dataset regarding transmission rate for a disease spreading among cattle at different farms during a 5-month period.</p> <p>The goal is to estimate the transmission parameter <span class="math-container">$\alpha$</span> using a Bayesian model.</p> <p>I have a dataset with 12 entries(dif...
https://stats.stackexchange.com/questions/576071/bayesian-modelwrite-out-likelihood-and-prior
Question: <p>How can I compute the Maximum A Posteriori (MAP) estimate of <span class="math-container">$\theta$</span> with this information: a discrete random variable y with values in {1, 2, ..., N} has a binomial distribution depending on the unknown probability <span class="math-container">$\theta \in (0,1)$</...
https://stats.stackexchange.com/questions/578414/compute-the-maximum-a-posteriori-map-estimate-of-%ce%b8
Question: <p>The prior of an inference problem where we try to infer <span class="math-container">$x$</span> from observations <span class="math-container">$y$</span> is defined as <span class="math-container">$P(X)$</span>. Often (<a href="https://arxiv.org/pdf/1010.5141.pdf" rel="nofollow noreferrer">e.g.</a>) I see ...
https://stats.stackexchange.com/questions/578510/bayesian-prior-definition
Question: <p><strong>Setup:</strong></p> <p>The relationship between the beta and binomial distributions is well known.</p> <p><span class="math-container">$$\frac{\pi^{\alpha - 1} (1 - \pi)^{\beta - 1}}{B(\alpha, \beta)} \leftrightarrow {{n}\choose{x}}\pi^{x} (1 - \pi)^{n-x}$$</span></p> <p>By comparing the two, one c...
https://stats.stackexchange.com/questions/581139/reasonable-to-incorporate-sample-size-into-beta-binomial
Question: <p>I am wondering if there are any good rules of thumb for how to go about selecting an approximate inference algorithm for a problem/model (specifically when exact inference is intractable)? When you are faced with a problem, what are the things you consider when selecting an approach for inference (e.g. MCM...
https://stats.stackexchange.com/questions/32214/how-to-go-about-selecting-an-algorithm-for-approximate-bayesian-inference
Question: <p>I need to learn both the VB and EM methods for Bayesian Networks. Before going into detail of both algorithms, which I am a bit aware of, I need to EXACTLY understand the basic motivations behind them. Different resources use the terms "inference, parameters, estimation, learning" so intermingled that I ea...
https://stats.stackexchange.com/questions/82184/comparison-of-variational-bayes-and-expectation-maximization-algorithms
Question: <p>The well known "German tank problem" shows how to answer the question: "If I have tanks which have an increasing serial number, and I see a sample of tanks and record their serial numbers, what is the likely total number of tanks". This question is analogous but is where there is no ordering to the observ...
https://stats.stackexchange.com/questions/109166/estimating-total-number-of-people-from-an-observed-sample
Question: <p>Suppose that we take a sample ($X_1, X_2, \ldots, X_n$) from a distribution where we assume that $X_i \sim Bin(n_i, p_i)$ and $n_i$ is known for every $i$. We also assume that the $p_i$'s are independent and identically distributed, $p_i \sim D$, where $D$ is some unknown distribution. $n_i$ cannot be assumed to b...
https://stats.stackexchange.com/questions/114139/inferring-prior-distribution
Question: <p>I am reading through a document on <a href="http://research.microsoft.com/en-us/um/cambridge/projects/infernet/docs/Mixture%20of%20Gaussians%20tutorial.aspx" rel="nofollow">learning Gaussian mixture models in Infer.NET</a>. They assume the data is generated from 2 Gaussians where the prior distribution o...
https://stats.stackexchange.com/questions/179882/is-the-posterior-distribution-on-means-in-a-bayesian-gaussian-mixture-model-with
Question: <p>This is more of a philosophical question, but from a purely Bayesian standpoint how does one actually form prior knowledge? If we need prior information to carry out valid inferences then there seems to be a problem if we have to appeal to past experience in justifying today's priors. We're apparently le...
https://stats.stackexchange.com/questions/201686/how-is-prior-knowledge-possible-under-a-purely-bayesian-framework
Question: <p>I have a variable that is a recursive function involving other variables with known distributions (see problem below). </p> <ul> <li>Let $b(t+1) = b(t) + C \sqrt{b(t)}$ where I know $C \sim N(1.82, .0298)$ and the initial value of $b$ [$b_{initial} \sim N(.02,0.0036)$].</li> </ul> <p>My observation for u...
https://stats.stackexchange.com/questions/216164/how-do-i-perform-bayesian-updating-for-a-function-of-multiple-parameters-each-w
Question: <p>It's a common misinterpretation of a 95% confidence interval to say that 95% of the time the true value lies within that interval.</p> <p>However, in Bayesian statistics, the 95% credible interval contains 95% of the probability from the probability density function. And if I repeat the experiment many...
https://stats.stackexchange.com/questions/286761/if-i-do-the-same-experiment-many-times-then-does-a-95-credible-interval-mean-95
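The question above can be probed by simulation: when the true parameter is itself drawn from the prior, a 95% credible set does contain the truth about 95% of the time. A stdlib-only sketch with a discrete flat prior over a grid of coin biases (all numbers made up):

```python
import random

random.seed(0)
grid = [i / 100 for i in range(1, 100)]  # discrete flat prior over p
n, sims, hits = 30, 2000, 0

for _ in range(sims):
    p_true = random.choice(grid)                 # truth drawn from the prior
    x = sum(random.random() < p_true for _ in range(n))
    # Posterior over the grid (flat prior, so just normalized likelihoods):
    lik = [p**x * (1 - p)**(n - x) for p in grid]
    tot = sum(lik)
    post = [(l / tot, p) for l, p in zip(lik, grid)]
    # Greedy 95% highest-posterior credible set:
    post.sort(reverse=True)
    cred, mass = set(), 0.0
    for w, p in post:
        cred.add(p)
        mass += w
        if mass >= 0.95:
            break
    hits += p_true in cred

coverage = hits / sims
print(round(coverage, 3))  # at or slightly above 0.95
```

The slight overshoot above 0.95 comes from the discrete credible set having to include a whole grid point to cross the 95% threshold.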
Question: <p>It is often said that the posterior in Bayes' rule, $$ P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)} $$ is <em>intractable</em>, because $$ P(D) = \int P(D,\theta) d\theta $$ and the latter is often a high-dimensional integral.</p> <p>See <a href="https://stats.stackexchange.com/questions/208176/wh...
https://stats.stackexchange.com/questions/300296/intractable-posterior-why-not-use-kernel-density-for-the-data-distribution
Question: <p>In my current study I am looking at the effects of creatine monohydrate ingestion on ground reaction force and repeated sprint times. With CM as the independent variable, and ground reaction force and repeated sprint times as the dependent variables, what Bayesian inferential test would be the best to cond...
https://stats.stackexchange.com/questions/323062/what-bayesian-test-to-conduct-with-one-independent-variable-and-two-dependent-va
Question: <p>Consider a coin with bias $p$. We generate a random sample $x_1, \dots, x_n \sim \text{Bernoulli}(p)$, but <strong>we do not observe results of these coin tosses</strong>. Instead, for each $x_i$, we observe a set of features $y_{i1}, \dots, y_{im}$ about the flip, e.g. the height of the toss, the coin's r...
https://stats.stackexchange.com/questions/355646/bayesian-inference-of-a-coins-bias-when-we-dont-directly-observe-the-flips
Question: <p>I am new to the Bayesian world, and I'm trying to understand how hypothesis tests are performed here (as opposed to the frequentist framework).</p> <p>I am aware that likelihoods, priors and posteriors can be discrete or continuous. And once we have calculated posteriors, we can build a lot of things like...
https://stats.stackexchange.com/questions/421269/bayesian-hypothesis-tests-with-continuous-priors
Question: <p>I am trying to implement the model given in <a href="http://proceedings.mlr.press/v84/andersen18a/andersen18a.pdf" rel="nofollow noreferrer">http://proceedings.mlr.press/v84/andersen18a/andersen18a.pdf</a> where they have used mean-field variational inference for posterior inference, but I want to use MCMC...
https://stats.stackexchange.com/questions/428676/how-to-start-coding-for-posterior-inference
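A minimal starting point for coding posterior inference with MCMC is a random-walk Metropolis sampler. The sketch below targets a toy coin-bias posterior, not the model from the paper cited above; the data (x = 7 heads in n = 10 tosses), flat prior, step size, and chain length are all made up. The key point is that only *ratios* of the unnormalized posterior are needed, so the normalizing constant P(D) cancels:

```python
import math
import random

random.seed(1)

# Unnormalized log-posterior for a coin bias p: flat prior on (0, 1),
# binomial likelihood with x = 7 heads in n = 10 tosses (made-up data).
n, x = 10, 7

def log_post(p):
    if not 0 < p < 1:
        return -math.inf
    return x * math.log(p) + (n - x) * math.log(1 - p)

# Random-walk Metropolis: propose a Gaussian step, accept with
# probability min(1, posterior ratio); the evidence P(D) cancels.
p, samples = 0.5, []
for i in range(20000):
    prop = p + random.gauss(0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(p):
        p = prop
    if i >= 5000:              # discard burn-in
        samples.append(p)

est = sum(samples) / len(samples)
print(round(est, 2))  # near the exact posterior mean 8/12 ~ 0.667
```

For this conjugate toy case the exact posterior is Beta(8, 4), which makes it easy to check the sampler before pointing it at a model whose posterior has no closed form.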
Question: <p>I have a general question about Bayesian inference which may help me solve a problem I have. It is best to illustrate this with an example, inspired by this great post by AllenDowney:</p> <p><a href="https://github.com/AllenDowney/BiteSizeBayes/blob/master/08_soccer_soln.ipynb" rel="nofollow noreferrer"...
https://stats.stackexchange.com/questions/461473/real-time-bayesian-updating-how-to-link-posteriors
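The standard way to "link" posteriors in real-time updating is to use each posterior as the prior for the next batch of data; with a conjugate model this is just running arithmetic. A beta-binomial sketch with made-up batches, showing that sequential updating matches a single pooled update:

```python
# Sequential Bayesian updating: yesterday's posterior is today's prior.
alpha, beta = 1.0, 1.0                 # flat Beta(1, 1) starting prior
batches = [(5, 3), (8, 2), (4, 4)]     # made-up (successes, failures) per batch

for s, f in batches:
    alpha, beta = alpha + s, beta + f  # posterior becomes the next prior

# One pooled update over all the data gives the identical posterior:
S = sum(s for s, _ in batches)         # 17 successes
F = sum(f for _, f in batches)         # 9 failures
print((alpha, beta) == (1.0 + S, 1.0 + F))  # -> True
```

This order-invariance holds whenever the batches are conditionally independent given the parameter; it is what makes streaming updates coherent.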
Question: <p>Suppose you have an initial bag of unique and identifiable items <span class="math-container">$(1.. K)$</span>. From this bag, someone used an arbitrary criterion to tag <span class="math-container">$N$</span> items. You don't know the chosen criterion (which can be anything, from odd numbers, to just the i...
https://stats.stackexchange.com/questions/472669/bayesian-estimation-of-the-underlying-population-size-knowing-its-upper-bound
Question: <p>Bayesian inference is drawn from the posterior distribution or - in case we are interested in forecasting - from the posterior predictive distribution. However, these values are heavily affected by the choice of the prior, even if you have decided to go for an uninformative one (which can be implemented in ma...
https://stats.stackexchange.com/questions/178502/how-to-perform-a-sensitivity-analysis-in-bayesian-statistics
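The simplest form of prior sensitivity analysis is to refit the same model under several candidate priors and compare the posterior summaries; if the conclusions move materially, the prior is doing real work. A beta-binomial sketch with made-up data and three illustrative priors:

```python
# Prior sensitivity check: same data, several priors, compare posteriors.
# Made-up data: x = 36 successes in n = 50 trials.
n, x = 50, 36
priors = {"flat": (1, 1), "Jeffreys": (0.5, 0.5), "skeptical": (10, 10)}

# Conjugate posterior mean under each prior: (a + x) / (a + b + n).
post_means = {name: (a + x) / (a + b + n) for name, (a, b) in priors.items()}
for name, m in post_means.items():
    print(f"{name:10s} posterior mean = {m:.3f}")

# The spread across priors is a crude one-number sensitivity measure:
spread = max(post_means.values()) - min(post_means.values())
```

Here the flat and Jeffreys priors barely differ, while the deliberately informative "skeptical" prior pulls the estimate toward 0.5; whether that spread matters depends on the decision the analysis feeds.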
Question: <blockquote> <p>"Because [Bayesian Inference] respects the forward flow of time or information, there's no need for nor availability of methods for correcting for multiplicity ... The evidence of one question is not tilted by whether other questions are being asked."</p> </blockquote> <p><a href="htt...
https://stats.stackexchange.com/questions/437456/bayesian-inference-in-the-presence-of-multiple-hypotheses
Question: <p>I've been recently studying Bayesian inference with PyMC3. I understand the flexibility that comes with multiple possible options for initial distribution choices, yet I can't seem to understand why one would need the sampling part. I realize this is a very naive question, yet I can't seem to understand why...
https://stats.stackexchange.com/questions/330554/bayesian-inference-a-use-case
Question: <p>Let $X_1$, $X_2$, ..., $X_n$ be iid RV's following a mixture distribution of two lognormals such that the pdf of each $X_i$ is $f_{mix}(x)=pf_1(x) + (1-p)f_2(x)$ where $f_1(x)$ and $f_2(x)$ are lognormal pdfs with parameters $\mu_1,\sigma$ and $\mu_2,\sigma$, respectively.</p> <p>Define $S_i$ as a sum of ...
https://stats.stackexchange.com/questions/273473/bayesian-inference-on-a-sum-of-iid-random-variables-with-known-distribution
Question: <p>Suppose I've got data <span class="math-container">$X$</span> from a model driven by parameter <span class="math-container">$\theta$</span>. The model is represented by the conditional density (likelihood function) <span class="math-container">$$f(x|\theta).$$</span> Suppose the prior density of <span clas...
https://stats.stackexchange.com/questions/600027/correctness-of-product-of-densities-representing-parts-of-information-as-prior-d
Question: <p>In order to understand the difference between the Frequentist and Bayesian inference, I was reading the presentation at: <a href="http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf" rel="nofollow">http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf</a> . In order to explain the...
https://stats.stackexchange.com/questions/112322/question-about-the-bayesian-inference-of-a-parameter
Question: <p>I am confused about one aspect of the use of Gaussian processes for Bayesian inference. I understand that it relies on the assumption that your train and test data points form a multivariate normal distribution where you define a prior mean and covariance for the distribution. What I don't understand is th...
https://stats.stackexchange.com/questions/612663/how-are-custom-kernel-functions-in-gaussian-processes-statistically-justified
Question: <p>In Bayesian inference we end up with the formula:</p> <p><span class="math-container">$$ P(\mathbf{w|t,X)}= \frac{P(\mathbf{t|w,X)}P(\mathbf{w)}}{\int P(\mathbf{t|w,X}) P(\mathbf{w}) d\mathbf{w}}$$</span></p> <p>Assume the prior <span class="math-container">$P(w)$</span> is a Gaussian distribution with 0...
https://stats.stackexchange.com/questions/430842/why-does-the-marginal-likelihood-integral-have-no-closed-form-solution
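When the marginal-likelihood integral has no closed form, one crude but instructive fallback is plain Monte Carlo over the prior: P(D) = E_prior[P(D | w)]. The toy model below is not the one from the question; the tanh link (which breaks Gaussian conjugacy), the observation t = 1.2, and the noise scale are all invented for illustration:

```python
import math
import random

random.seed(2)

# Toy model: w ~ N(0, 1) prior; one observation t | w ~ N(tanh(w), 0.5^2).
# The tanh nonlinearity means the evidence integral has no closed form.
t, sigma = 1.2, 0.5

def lik(w):
    z = (t - math.tanh(w)) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Monte Carlo estimate of P(D): average the likelihood over prior draws.
M = 200_000
est = sum(lik(random.gauss(0, 1)) for _ in range(M)) / M
print(round(est, 3))
```

Naive prior sampling like this degrades badly when the likelihood is concentrated relative to the prior (most draws contribute almost nothing), which is why importance sampling and more specialized evidence estimators exist; but it makes the quantity being approximated concrete.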
Question: <p>Suppose I have the Bayesian network in the figure and the corresponding conditional probability table for each node, where A and B are the hidden variables, and C and D are the observed variables. What probabilistic inference algorithm can I use to get all the conditional probabilities in Table - 1? can I ...
https://stats.stackexchange.com/questions/617132/inference-in-bayesian-networks-with-hidden-variables
Question: <p>Suppose that the proportion θ of defective items in a large manufactured lot is known to be either 0.05 or 0.15, and the prior pmf of θ is as follows: ξ(0.05) = a and ξ(0.15) = b. Suppose also that when n = 10 items are selected at random from the lot, it is found that X = 5 of them are defective.</p> <p>...
https://stats.stackexchange.com/questions/325864/finding-values-a-and-b-to-get-pmf-with-certain-mean-and-standard-deviation
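With a two-point prior, the posterior in the question above is a direct Bayes-rule calculation over the two candidate values of θ. The sketch below plugs in an illustrative a = b = 0.5 purely to show the mechanics; the question itself asks for a and b satisfying other constraints, which this does not determine:

```python
from math import comb

# theta is either 0.05 or 0.15; prior xi(0.05) = a, xi(0.15) = b.
# Observed: X = 5 defectives out of n = 10 sampled items.
a, b = 0.5, 0.5          # illustrative prior only
n, x = 10, 5

def lik(theta):
    # Binomial likelihood of seeing x defectives in n draws.
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

num05, num15 = a * lik(0.05), b * lik(0.15)
post05 = num05 / (num05 + num15)
post15 = 1 - post05
print(round(post05, 4), round(post15, 4))
```

Five defectives out of ten is far more probable under θ = 0.15 than under θ = 0.05, so the posterior mass shifts almost entirely to 0.15 even from an even prior.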
Question: <p>I understand that, from a decision perspective, identifiability of a model is needed to ensure convergence (with an increasing number of observations) of the parameter estimates to a single value. But if the non-identifiability of a given model is not a modeling artifact but clearly characterises s...
https://stats.stackexchange.com/questions/60446/whats-the-problem-with-model-identifiability
Question: <p>Could anybody explain the steps to compute the highest posterior density (HPD) interval when the posterior distribution is known? For instance, when the posterior distribution is Beta distributed.</p> <p>When the posterior distribution is simulated, the <a href="https://www.jstor.org/stable/1390...
https://stats.stackexchange.com/questions/304957/how-to-construct-the-highest-posterior-density-hpd-interval
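For a known unimodal posterior such as a Beta, the HPD interval can be found numerically by "lowering a horizontal line" through the density: keep the highest-density points until they hold 95% of the mass. A stdlib-only grid sketch for an illustrative Beta(8, 4) posterior (the parameters are made up):

```python
import math

# Known posterior: Beta(a, b) with illustrative parameters.
a, b = 8.0, 4.0
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # beta function

# Evaluate the density on a fine grid of midpoints in (0, 1).
N = 100_000
xs = [(i + 0.5) / N for i in range(N)]
dens = [x**(a - 1) * (1 - x)**(b - 1) / B for x in xs]

# Visit grid points from highest density down, stopping at 95% mass;
# for a unimodal density the kept points form the HPD interval.
order = sorted(range(N), key=lambda i: dens[i], reverse=True)
mass, kept = 0.0, []
for i in order:
    kept.append(xs[i])
    mass += dens[i] / N        # midpoint-rule mass of this grid cell
    if mass >= 0.95:
        break

lo, hi = min(kept), max(kept)
print(round(lo, 3), round(hi, 3))
```

Because the HPD interval is the shortest region with the stated coverage, it sits slightly to the right of the equal-tail interval here (Beta(8, 4) is left-skewed) and always contains the mode, 0.7 in this case.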
Question: <p>I am working in a Bayesian framework: I have some observations $y$, for which I assume a statistical model. The model depends on parameters $\theta \in \Theta$ ($\Theta$ is the parameters space). I assume a probability distribution $q$ on $\Theta$. The parameters of this model can be estimated in a <em>max...
https://stats.stackexchange.com/questions/247677/change-of-variable-in-posterior-distribution
Question: <p>I was reading an extract from the book "Regression and Other Stories", and in chapter 9 the author distinguishes between 3 cases:</p> <p>"After fitting a regression, <span class="math-container">$y = a + bx + error$</span>, we can use it to predict a new data point, or a set of new data points,...
https://stats.stackexchange.com/questions/609894/difference-between-the-linear-predictor-with-uncertainty-and-predictive-distribu