1.1 Bayes’ Theorem
For mutually exclusive and exhaustive events \(E_i\), \(1\le i\le n\), and an event \(E\), we have the following formula for the probability of \(E_k\) given that event \(E\) has happened:
\(P(E_k|E)=\dfrac{P(E_k)P(E|E_k)}{\sum\limits_{i=1}^{n}P(E_i)P(E|E_i)}\).
This is what is known as Bayes’ Theorem, or the theorem of inverse probability.
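As a quick illustration, here is a minimal Python sketch of this formula; the function name `bayes_posterior` and its arguments are my own choices for this note, not standard library code.

```python
def bayes_posterior(priors, likelihoods, k):
    """Return P(E_k | E) given priors P(E_i) and likelihoods P(E | E_i).

    priors and likelihoods are sequences indexed in the same order
    (0-based here), so k picks out the hypothesis of interest.
    """
    numerator = priors[k] * likelihoods[k]
    denominator = sum(p * l for p, l in zip(priors, likelihoods))
    return numerator / denominator
```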
Ex 1
Bag A contains 2 white balls and 3 red balls, while bag B contains 4 white balls and 5 red balls. One ball is drawn at random from one of the bags and it is found to be red. What is the probability that it was drawn from the second bag?
Solution:
Let’s denote by \(E_1\) the event that the ball was drawn from bag A, and by \(E_2\) the event that the ball was drawn from the second bag. Also let \(E\) denote the event that the drawn ball is red.
We have \(P(E_1)=P(E_2)=\frac{1}{2}\). Also \(P(E|E_1)=\frac{3}{5}\) and \(P(E|E_2)=\frac{5}{9}\).
Thus the required probability is:
\(P(E_2|E)=\frac{P(E_2)P(E|E_2)}{P(E_1)P(E|E_1)+P(E_2)P(E|E_2)}=\frac{\frac{1}{2}\times \frac{5}{9}}{\frac{1}{2}\times \frac{3}{5}+\frac{1}{2}\times\frac{5}{9}}=\frac{25}{52}\).
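Using the sketch from the previous subsection, the same numbers can be checked numerically; the exact fractions below are just a verification of the hand calculation.

```python
from fractions import Fraction

priors = [Fraction(1, 2), Fraction(1, 2)]        # P(E1), P(E2)
likelihoods = [Fraction(3, 5), Fraction(5, 9)]   # P(E|E1), P(E|E2)
print(bayes_posterior(priors, likelihoods, 1))   # Fraction(25, 52)
```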
1.2 Introduction to Random Variables and Expectation
A random variable, usually written \(X\), is a variable whose possible values are numerical outcomes of a random phenomenon. There are two types of random variables: discrete and continuous. Generally speaking, a random variable just denotes the outcome of an experiment by a numerical value. Often, a random variable assumes a countable set of values, like 0, 1, 2, 3, …, and such random variables are called discrete random variables. If the random variable instead assumes an uncountable set of values, it is called a continuous random variable. For a random variable, we assign a probability to each of its possible values; this assignment is called the probability distribution of the random variable. It is also sometimes called the probability function or the probability mass function.
We’ll mostly talk about discrete random variables here. We now introduce another term, the ‘expectation’ of a random variable. The expectation is the value we should expect the random variable to take on average when the experiment is carried out in a random setup. Importantly, the expected value need not be one of the specific values the random variable can actually take. For example, if we think of rolling a fair die as the experiment, the expected value of the outcome is 3.5, which means that on average we should expect a value somewhere between 3 and 4.
Mathematically speaking, we denote the expected value of a random variable as follows:
\(E(X)=\sum\limits_{x}xP(X=x)\). This sum is calculated over the set of all values that the random variable \(X\) can assume.
We denote the probability mass function of a random variable \(X\) by \(f(x)=P(X=x)\), which lets us rewrite the expectation as \(E(X)=\sum\limits_{x}xf(x)\).
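In code, this is just a weighted sum over the PMF. The small Python helper below (the name `expectation` and the dictionary representation of the PMF are my own choices) mirrors \(E(X)=\sum\limits_{x}xf(x)\).

```python
def expectation(pmf):
    """Expected value of a discrete random variable.

    pmf maps each value x to its probability f(x) = P(X = x).
    """
    return sum(x * p for x, p in pmf.items())
```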
Consider the rolling of a fair die. Here \(X\) takes the values \(1,2,3,4,5,6\), and \(P(X=1)=f(1)=\frac{1}{6}\); similarly for \(X=2,3,4,5,6\).
Thus \(E(X)=\sum\limits_{x=1}^{6}xf(x)=\sum\limits_{x=1}^{6}x\times\frac{1}{6}=\frac{1}{6}\times\frac{6\times 7}{2}=\frac{21}{6}=3.5\).
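As a sanity check of the 3.5 figure, here is a short sketch that evaluates the PMF helper above for a fair die and also estimates the mean by simulating rolls; the Monte Carlo average will only be close to 3.5, not exact.

```python
import random
from fractions import Fraction

# Exact expectation from the PMF: each face 1..6 has probability 1/6.
die_pmf = {x: Fraction(1, 6) for x in range(1, 7)}
print(expectation(die_pmf))        # Fraction(7, 2), i.e. 3.5

# Monte Carlo estimate: average of many simulated rolls.
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))     # approximately 3.5
```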
This expectation is also referred to as the mean of the random variable \(X\).