from typing import List
import random
import gradio as gr
SAMPLE_SPACE = [1, 2, 3, 4, 5, 6]
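# Pick a uniformly random event size, then sample that many distinct outcomes:
# this yields a random non-empty event over the sample space.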
def random_event() -> List[int]:
n = random.randint(1, len(SAMPLE_SPACE))
return random.sample(SAMPLE_SPACE, n)
def convert_to_str(outcomes: List[int]) -> str:
return ", ".join(map(str, outcomes))
def parse_str(outcomes: str) -> List[int]:
if not outcomes:
return []
try:
return list(map(int, outcomes.split(",")))
except ValueError:
raise ValueError("Please enter a list of integers separated by commas.")
def compute_probability(favorable_outcomes: List[int], possible_outcomes: List[int]):
assert all(outcome in possible_outcomes for outcome in favorable_outcomes), "Favorable outcomes must be a subset of possible outcomes."
return len(favorable_outcomes) / len(possible_outcomes)
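# A small standalone sketch (not used by the UI): conditional probability for
# equally likely outcomes, P(A|B) = P(A ∩ B) / P(B). It mirrors the ratio the
# interface below computes.
def conditional_probability(outcomes_a: List[int], outcomes_b: List[int], possible_outcomes: List[int]) -> float:
    intersection = [outcome for outcome in outcomes_a if outcome in outcomes_b]
    return compute_probability(intersection, possible_outcomes) / compute_probability(outcomes_b, possible_outcomes)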
# Create a Gradio interface
css = """
.gradio-container {
width: 40%!important;
min-width: 800px;
}
details {
border: 1px solid #aaa;
border-radius: 4px;
padding: 0.5em 0.5em 0;
}
summary {
font-weight: bold;
margin: -0.5em -0.5em 0;
padding: 0.5em;
}
details[open] {
padding: 0.5em;
}
details[open] summary {
border-bottom: 1px solid #aaa;
margin-bottom: 0.5em;
}
"""
with gr.Blocks(css=css) as demo:
gr.Markdown(
"""
# Probability Basics (Pt. 3)
<div align="center">
<br>
<p>Let's discover more fundamental principles of probability theory!</p>
<br>
</div>
Welcome to this new segment of the ***Probability Basics series***! 🎉
<br>So far, we have covered the following:
- [Part 1](https://huggingface.co/spaces/helboukkouri/probability-basics-pt1): **preliminary concepts** (sample spaces, events, and how to calculate the probability of simple events);
- [Part 2](https://huggingface.co/spaces/helboukkouri/probability-basics-pt2): **the axioms of probability** (non-negativity, normalization, and additivity).
In this part, we will introduce the concept of `conditional probability`, along with the idea of event `independence`.
<br>***NOTE**: It is easy to confuse `disjoint` and `independent` events, but they are not the same thing. So stay tuned to learn more!*
"""
)
gr.Markdown(
r"""
## Conditional Probability
Sometimes we are not interested in the probability of an event in isolation. Instead, we might want to know the probability of an event given that another event has already occurred. This is known as `conditional probability` and is denoted as:
$$P(A|B)$$
which reads as `the probability of event A given that event B has occurred`.
When an event `B` has a **non-zero probability** of happening, the conditional probability of `A` given `B` is defined as:
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
This also gives us a nice way of computing the probability of two events `A` and `B` occurring together:
$$P(A \cap B) = P(A|B) \times P(B) = P(B|A) \times P(A)$$
Intuitively, for `A` and `B` to occur together, one of them has to happen first, and then the other has to occur given the first 🤓.
In the context of a six-sided die, we can easily compute the probabilities and check these formulas.
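For example, take `A = {3, 4}` and `B = {2, 4, 6}` (the default events below). Since `A ∩ B = {4}`:
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{1/6}{1/2} = \frac{1}{3}$$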
"""
)
with gr.Column():
with gr.Row():
randomize = gr.Button(value="Randomize Events")
compute = gr.Button(value="Compute")
with gr.Row():
with gr.Column():
outcomes_A = gr.Textbox(label="Favorable Outcomes: A", value="3, 4")
outcomes_B = gr.Textbox(label="Favorable Outcomes: B", value="2, 4, 6")
outcomes_AB = gr.Textbox(label="Favorable Outcomes: A ∩ B", value="4")
all_outcomes = gr.Textbox(label="All Possible Outcomes", value="1, 2, 3, 4, 5, 6", interactive=False)
with gr.Column():
proba_A = gr.Textbox(label="Probability: A", value="", interactive=False)
proba_B = gr.Textbox(label="Probability: B", value="", interactive=False)
proba_AB = gr.Textbox(label="Probability: A ∩ B", value="", interactive=False)
proba_AgB = gr.Textbox(label="Probability: A | B", value="", interactive=False)
randomize.click(
lambda: convert_to_str(random_event()),
inputs=[],
outputs=outcomes_A
)
outcomes_A.change(
lambda a, b: convert_to_str(sorted(set(parse_str(a)).intersection(parse_str(b)))),
inputs=[outcomes_A, outcomes_B],
outputs=outcomes_AB
)
randomize.click(
lambda: convert_to_str(random_event()),
inputs=[],
outputs=outcomes_B
)
outcomes_B.change(
lambda a, b: convert_to_str(sorted(set(parse_str(a)).intersection(parse_str(b)))),
inputs=[outcomes_A, outcomes_B],
outputs=outcomes_AB
)
compute.click(
lambda a, b: f"{compute_probability(parse_str(a), parse_str(b)):.2%}",
inputs=[outcomes_A, all_outcomes],
outputs=proba_A
)
compute.click(
lambda a, b: f"{compute_probability(parse_str(a), parse_str(b)):.2%}",
inputs=[outcomes_B, all_outcomes],
outputs=proba_B
)
compute.click(
lambda a, b: f"{compute_probability(parse_str(a), parse_str(b)):.2%}",
inputs=[outcomes_AB, all_outcomes],
outputs=proba_AB
)
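# P(A|B) = P(A ∩ B) / P(B), computed as a ratio of the two probabilities
# over the same sample space.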
compute.click(
lambda a, b, c: f"{compute_probability(parse_str(a), parse_str(b)) / compute_probability(parse_str(c), parse_str(b)):.2%}",
inputs=[outcomes_AB, all_outcomes, outcomes_B],
outputs=proba_AgB
)
gr.Markdown(
r"""
## Bayes' Rule
`Bayes' Rule` is a fundamental theorem in probability theory that allows us to switch the condition and the event in a conditional probability. For two events `A` and `B`, Bayes' Rule states that:
$$P(A|B) = \frac{P(B|A) \times P(A)}{P(B)}$$
This is easily derived from the definition of conditional probability. Specifically from:
$$P(A|B) \times P(B) = P(A \cap B) = P(B \cap A) = P(B|A) \times P(A)$$
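For example, with a fair die, let `A = {4}` and let `B` be the event of rolling an even number. Rolling a 4 guarantees an even roll, so $P(B|A) = 1$ and:
$$P(A|B) = \frac{P(B|A) \times P(A)}{P(B)} = \frac{1 \times 1/6}{1/2} = \frac{1}{3}$$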
## Law of Total Probability
The `law of total probability` allows us to break down an event `A` according to a set of events whose probabilities are easier to tackle.
<br>
Let's say we have some events:
$$B_1, B_2, \ldots, B_n$$
that form a `partition` of the sample space, meaning that the events do not overlap and together they cover the entire sample space.
<br>Then we can express the probability of event `A` as:
$$P(A) = \sum_{i=1}^{n} P(A|B_i) \times P(B_i) = P(A|B_1) \times P(B_1) + \ldots + P(A|B_n) \times P(B_n)$$
In many cases, we can use a single intermediate event `B` and its complement to compute:
$$P(A) = P(A|B) \times P(B) + P(A|\text{not }{B}) \times P(\text{not }{B})$$
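For example, with a fair die, let `A` be rolling an even number and `B = {1, 2, 3}`. Then $P(A|B) = 1/3$, $P(A|\text{not }B) = 2/3$, and:
$$P(A) = \frac{1}{3} \times \frac{1}{2} + \frac{2}{3} \times \frac{1}{2} = \frac{1}{2}$$
which matches the direct computation.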
This is a very useful tool in probability theory and is used in many real-world applications.
## The Monty Hall problem
The Monty Hall problem is a famous probability puzzle that shows how counterintuitive probability can be. It is also a good illustration of how we can decompose a complex probability into simpler ones using `conditional probability` and the `law of total probability`.
The problem is as follows:
- You are a contestant on the show and are presented with `three doors`.
- Behind one of the doors is a `car`, and behind the other two are `goats`.
- You choose a door to begin with.
- The host, who knows what is behind each door, opens one of the other two doors to reveal a `goat`.
- You are then given the option to change your choice to the other unopened door.
The question is: `should you change your choice or stick with your initial choice? 🤔`
<br><br>
You can play around with the following simulation to see how the probabilities change as you make your choices.
"""
)
with gr.Column():
with gr.Row():
shuffle_doors = gr.Button(value="Shuffle Doors")
reset_game = gr.Button(value="Reset Game", visible=False)
with gr.Row():
choose_1 = gr.Button(value="Choose Door n°1", visible=False)
choose_2 = gr.Button(value="Choose Door n°2", visible=False)
choose_3 = gr.Button(value="Choose Door n°3", visible=False)
reveal_first = gr.Button(value="Reveal First Goat", visible=False)
reveal_all = gr.Button(value="Reveal Remaining Doors", visible=False)
with gr.Row():
door_1 = gr.Textbox(label="Door 1", value="", interactive=False)
door_1_ = gr.Textbox(value="🐐", interactive=False, visible=False)
choice_1 = gr.Textbox(value="", interactive=False, visible=False)
door_2 = gr.Textbox(label="Door 2", value="", interactive=False)
door_2_ = gr.Textbox(value="🚗", interactive=False, visible=False)
choice_2 = gr.Textbox(value="", interactive=False, visible=False)
door_3 = gr.Textbox(label="Door 3", value="", interactive=False)
door_3_ = gr.Textbox(value="🐐", interactive=False, visible=False)
choice_3 = gr.Textbox(value="", interactive=False, visible=False)
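# Each helper below returns visibility updates for, in this order:
# [shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game]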
def game_initial_state():
return (
gr.update(visible=True),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False)
)
def after_shuffle_doors():
return (
gr.update(visible=False),
gr.update(visible=True),
gr.update(visible=True),
gr.update(visible=True),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False)
)
def after_door_choice():
return (
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=True),
gr.update(visible=False),
gr.update(visible=False)
)
def after_first_reveal():
return (
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=True),
gr.update(visible=False)
)
def after_final_reveal():
return (
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=False),
gr.update(visible=True)
)
# The following helpers are not wired to any event handler yet; they return
# visibility updates for (choose_1, choose_2, choose_3) and (reveal_first, reveal_all).
def show_choice_buttons():
    return (
        gr.update(visible=True),
        gr.update(visible=True),
        gr.update(visible=True)
    )
def hide_reveal_buttons():
    return (
        gr.update(visible=False),
        gr.update(visible=False)
    )
def show_first_reveal_button():
    return (
        gr.update(visible=True),
        gr.update(visible=False)
    )
def show_final_reveal_button():
    return (
        gr.update(visible=False),
        gr.update(visible=True)
    )
shuffle_doors.click(
lambda: random.sample(["🚗", "🐐", "🐐"], 3) + [""] * 3,
inputs=[],
outputs=[door_1_, door_2_, door_3_, door_1, door_2, door_3]
)
shuffle_doors.click(
after_shuffle_doors,
inputs=[], outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game]
)
choose_1.click(lambda: [1, 0, 0], inputs=[], outputs=[choice_1, choice_2, choice_3])
choose_1.click(after_door_choice, inputs=[], outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game])
choose_2.click(lambda: [0, 1, 0], inputs=[], outputs=[choice_1, choice_2, choice_3])
choose_2.click(after_door_choice, inputs=[], outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game])
choose_3.click(lambda: [0, 0, 1], inputs=[], outputs=[choice_1, choice_2, choice_3])
choose_3.click(after_door_choice, inputs=[], outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game])
def reveal_first_goat(a, b, c, d, e, f):
    # Mask the door the player chose, then reveal the first goat among the rest.
    masked = [door if choice != "1" else "" for choice, door in zip([d, e, f], [a, b, c])]
    goat_index = masked.index("🐐")
    return ["🐐" if i == goat_index else "" for i in range(3)]
reveal_first.click(
    reveal_first_goat,
    inputs=[door_1_, door_2_, door_3_, choice_1, choice_2, choice_3],
    outputs=[door_1, door_2, door_3]
)
reveal_first.click(after_first_reveal, inputs=[], outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game])
reveal_all.click(
lambda a, b, c: [a, b, c],
inputs=[door_1_, door_2_, door_3_],
outputs=[door_1, door_2, door_3]
)
reveal_all.click(
after_final_reveal,
inputs=[],
outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game]
)
reset_game.click(
lambda: ["", "", ""],
inputs=[],
outputs=[door_1, door_2, door_3]
)
reset_game.click(
game_initial_state,
inputs=[],
outputs=[shuffle_doors, choose_1, choose_2, choose_3, reveal_first, reveal_all, reset_game]
)
gr.Markdown(
r"""
<br>
<details>
<summary>Show Solution</summary>
At first glance, the probability of winning the car seems to be 1/3 since there are three doors and only one of them hides the car.
However, the probability of winning the car changes according to whether you change your choice or not. Let's see how!
Let's call the event of winning the car after changing doors: `X`. Then, by the law of total probability (conditioning on what our initial door hides):
$$P(X) = P(X | \text{🚗}) \times P(\text{🚗}) + P(X | \text{🐐}) \times P(\text{🐐})$$
When we change our door after initially choosing the car, we lose. So:
$$P(X | \text{🚗}) = 0$$
When we change our door after initially choosing a goat, the host has already revealed the other goat, so switching necessarily lands on the car and we win. So:
$$P(X | \text{🐐}) = 1$$
There are two goats and one car, so:
$$P(\text{🚗}) = \frac{1}{3}$$
$$P(\text{🐐}) = \frac{2}{3}$$
Therefore:
$$P(X) = 0 \times \frac{1}{3} + 1 \times \frac{2}{3} = \frac{2}{3}$$
As a result, you are twice as likely to win the car if you change your choice! 🎉
</details>
## Independence of Events
Now that we know how to compute the probability of an event given that another, more or less related, event has already occurred, it is fair to wonder about events that are completely unrelated.
Specifically, two events are said to be `independent` if the occurrence of one event does not affect the occurrence of the other. In other words, the probability of one event does not depend on the occurrence of the other.
Mathematically, two events `A` and `B` are `independent` if and only if:
$$P(A \cap B) = P(A) \times P(B)$$
One important thing to note is that `disjoint` events are not necessarily `independent`. In fact, more often than not (unless one of the events is impossible), two disjoint events are actually dependent, since:
$$P(A \cap B) = 0 \neq P(A) \times P(B)$$
Another interesting result is that the conditional probability of `A` given `B` is the same as the probability of `A` if `A` and `B` are independent:
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A) \times P(B)}{P(B)} = P(A)$$
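For example, with a fair die, the events `A = {2, 4, 6}` (an even roll) and `B = {1, 2}` are independent:
$$P(A \cap B) = P(\{2\}) = \frac{1}{6} = \frac{1}{2} \times \frac{1}{3} = P(A) \times P(B)$$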
The END.
"""
)
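# A minimal Monte Carlo sanity check of the Monty Hall solution above (a
# standalone sketch, not wired into the UI): switching wins whenever the
# initial pick was a goat, so the estimate should approach 2/3.
def simulate_monty_hall(n_games: int = 10_000) -> float:
    wins_when_switching = 0
    for _ in range(n_games):
        doors = random.sample(["🚗", "🐐", "🐐"], 3)
        choice = random.randrange(3)
        # The host opens a goat door that is not the player's choice.
        revealed = next(i for i in range(3) if i != choice and doors[i] == "🐐")
        # The player switches to the remaining unopened door.
        switched = next(i for i in range(3) if i not in (choice, revealed))
        wins_when_switching += doors[switched] == "🚗"
    return wins_when_switching / n_games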
if __name__ == "__main__":
demo.launch()