In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A | B) or occasionally P_B(A). This can also be understood as the fraction of probability B that intersects with A: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$.
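As a quick sanity check of the ratio definition, the probability of rolling a 2 with a fair die, given that the roll is even, can be computed by counting outcomes. The events and helper function below are illustrative, not from the source:

```python
from fractions import Fraction

# Sample space of a fair six-sided die; every outcome is equally likely.
omega = {1, 2, 3, 4, 5, 6}
A = {2}              # event A: the roll is a 2
B = {2, 4, 6}        # event B: the roll is even

def prob(event, space):
    """Probability of an event under a uniform distribution on `space`."""
    return Fraction(len(event & space), len(space))

p_b = prob(B, omega)             # P(B) = 1/2
p_a_and_b = prob(A & B, omega)   # P(A ∩ B) = 1/6
p_a_given_b = p_a_and_b / p_b    # P(A | B) = (1/6) / (1/2) = 1/3

print(p_a_given_b)  # 1/3
```

Using `Fraction` keeps the arithmetic exact, so the result matches the hand computation with no floating-point noise.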
For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person is sick, then they are much more likely to be coughing. For example, the conditional probability that someone who is sick is coughing might be 75%, in which case we would have P(Cough) = 5% and P(Cough | Sick) = 75%. Although there is a relationship between A and B in this example, such a relationship or dependence between A and B is not necessary, nor do they have to occur simultaneously.
P(A | B) may or may not be equal to P(A) (the unconditional probability of A). If P(A | B) = P(A), then events A and B are said to be independent: in such a case, knowledge about either event does not alter the likelihood of the other. P(A | B) (the conditional probability of A given B) typically differs from P(B | A). For example, if a person has dengue fever, the person might have a 90% chance of being tested as positive for the disease. In this case, what is being measured is that if event B (having dengue) has occurred, the probability of A (testing positive) given that B occurred is 90%, written P(A | B) = 90%. Alternatively, if a person is tested as positive for dengue fever, they may have only a 15% chance of actually having this rare disease due to high false positive rates. In this case, the probability of the event B (having dengue) given that the event A (testing positive) has occurred is 15%, or P(B | A) = 15%. It should be apparent now that falsely equating the two probabilities can lead to various errors of reasoning, which is commonly seen through base rate fallacies.
While conditional probabilities can provide extremely useful information, limited information is often supplied or at hand. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: $P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$. Another option is to display conditional probabilities in a conditional probability table to illuminate the relationship between events.
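The dengue example above can be reproduced numerically with Bayes' theorem. The prevalence and false positive rate below are illustrative values chosen so the result roughly matches the 15% in the text; they are not figures from the source:

```python
# Illustrative numbers: a rare disease with a sensitive but imperfect test.
p_dengue = 0.001             # prevalence P(B), assumed for illustration
p_pos_given_dengue = 0.90    # sensitivity P(A | B), as in the text
p_pos_given_healthy = 0.005  # false positive rate, assumed for illustration

# Law of total probability: P(A) = P(A|B) P(B) + P(A|not B) P(not B)
p_pos = (p_pos_given_dengue * p_dengue
         + p_pos_given_healthy * (1 - p_dengue))

# Bayes' theorem: P(B | A) = P(A | B) P(B) / P(A)
p_dengue_given_pos = p_pos_given_dengue * p_dengue / p_pos

print(f"P(positive) = {p_pos:.4%}")
print(f"P(dengue | positive) = {p_dengue_given_pos:.1%}")  # about 15%
```

Even with 90% sensitivity, most positives are false positives because the disease is rare; that gap between P(A | B) and P(B | A) is exactly the base rate fallacy mentioned above.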
Given two events A and B from the sigma-field of a probability space, with the unconditional probability of B being greater than zero (i.e., P(B) > 0), the conditional probability of A given B ($P(A \mid B)$) is the probability of A occurring if B has or is assumed to have happened. A is assumed to be the set of all possible outcomes of an experiment or random trial that has a restricted or reduced sample space. The conditional probability can be found by the quotient of the probability of the joint intersection of events A and B ($P(A \cap B)$), the probability at which A and B occur together, although not necessarily at the same time, and the probability of B:
<math>P(A \mid B) = \frac{P(A \cap B)}{P(B)}.</math>
In partial conditional probability, the condition events $B_i$ are required to have occurred only to a degree $b_i$ (their degree of belief or relative frequency) rather than with certainty. Such an $n$-bounded partial conditional probability can be defined as the conditionally expected average occurrence of event $A$ in testbeds of length $n$ that adhere to all of the probability specifications $B_i \equiv b_i$, i.e.:
<math>P^n(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m)=
\operatorname E(\overline{A}^n\mid\overline{B}^n_1=b_1, \ldots, \overline{B}^n_m=b_m)
</math> The new information can be incorporated as follows:
Let Ω be a sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. Suppose we are told that the event B ⊆ Ω has occurred. A new probability distribution (denoted by the conditional notation) is to be assigned on {ω} to reflect this. All events that are not in B will have null probability in the new distribution. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one - and every event that is not in B, therefore, has a null probability. Hence, for some scale factor α, the new distribution must satisfy:
#$\omega \in B : P(\omega\mid B) = \alpha P(\omega)$
#$\omega \notin B : P(\omega\mid B) = 0$
#$\sum_{\omega \in \Omega} {P(\omega\mid B)} = 1.$
Substituting 1 and 2 into 3 to select α:
<math>\begin{align}
1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\
&= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega \notin B} P(\omega\mid B)} \\
&= \alpha \sum_{\omega \in B} {P(\omega)} \\[5pt]
&= \alpha \cdot P(B) \\[5pt]
\Rightarrow \alpha &= \frac{1}{P(B)}
\end{align}</math>
So the new probability distribution is
#$\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)}$
#$\omega \notin B: P(\omega\mid B) = 0$
Now for a general event A,
<math>\begin{align}
P(A\mid B)
&= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\
&= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\[5pt]
&= \frac{P(A \cap B)}{P(B)}
\end{align}</math>
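The derivation above can be checked numerically on a small discrete sample space. The sample space and weights below are an arbitrary illustration; the point is that conditioning rescales the probabilities of outcomes in B by α = 1/P(B) and zeroes out the rest:

```python
from fractions import Fraction

# An arbitrary (non-uniform) probability measure on Ω = {a, b, c, d}.
P = {"a": Fraction(1, 2), "b": Fraction(1, 4),
     "c": Fraction(1, 8), "d": Fraction(1, 8)}

B = {"b", "c", "d"}
p_B = sum(P[w] for w in B)  # P(B) = 1/2

# The conditional distribution: scale by α = 1/P(B) on B, zero elsewhere.
P_given_B = {w: (P[w] / p_B if w in B else Fraction(0)) for w in P}

assert sum(P_given_B.values()) == 1  # condition 3: it is a distribution
assert P_given_B["a"] == 0           # condition 2: outcomes outside B vanish

# For a general event A, summing the conditional weights over A ∩ B
# recovers P(A ∩ B) / P(B).
A = {"a", "b"}
p_A_given_B = sum(P_given_B[w] for w in A)
assert p_A_given_B == sum(P[w] for w in A & B) / p_B  # (1/4)/(1/2) = 1/2

print(p_A_given_B)  # 1/2
```

The outcome "a" contributes nothing even though it lies in A, mirroring the vanishing sum over A ∩ B^c in the final derivation.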