1.1 An Introduction to Probability
In basic mathematical terminology, the probability of an event is the ratio of the number of outcomes favourable to that event to the total number of possible outcomes. This ratio is a number between 0 and 1. A probability of 1 means the event is a ‘sure event’, i.e. there is no way the event cannot happen. Similarly, a probability of 0 means the event is an ‘impossible event‘, i.e. there is no way for the event to occur.
In most real-life scenarios, the probability of an event lies strictly between 0 and 1. Such events are typically the results of an experiment with multiple possible outcomes, i.e. one cannot predict the outcome of the experiment in advance. Experiments of this kind are referred to, in probabilistic terms, as ‘Random Experiments’, and the events are their possible outcomes.
We have defined the probability of an event to be a ratio. Thus, the probability that an experiment has a particular outcome is the ratio of the number of scenarios that result in that outcome to the total number of scenarios possible in the experiment. The set of all possible results, or scenarios, of an experiment is called the ‘Sample Space‘ of that experiment.
Consider a simple coin-tossing problem – if we toss a coin in the air, what is the probability that the toss results in Tails? Let’s analyse the Sample Space for this problem. There are only 2 possible results of this experiment (tossing a coin) – Heads (H) or Tails (T). Thus the Sample Space = {H,T}. We need the probability of Tails, and the number of results favourable to this is 1 – i.e. the toss resulting in a Tail. Thus the probability is \(\frac{1}{2}\).
If we were to toss the coin twice and ask for the probability that both tosses result in Tails, the sample space becomes {HH,HT,TH,TT}. Only one of these four equally likely results, TT, is favourable, so the probability is \(\frac{1}{4}\).
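As an optional check of this answer, here is a minimal Python sketch (an addition, not part of the original discussion) that estimates the probability of TT by simulating repeated pairs of tosses; the trial count of 100,000 is an arbitrary choice.

```python
import random

def estimate_tt_probability(trials=100_000):
    """Estimate P(both tosses are Tails) by simulating pairs of fair coin tosses."""
    hits = 0
    for _ in range(trials):
        # Each toss is H or T with equal probability.
        if random.choice("HT") == "T" and random.choice("HT") == "T":
            hits += 1
    return hits / trials

print(estimate_tt_probability())  # should print a value close to 1/4 = 0.25
```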
1.2 Events Terminology
a) Equally Likely Events – Events that have the same probability in an experiment.
b) Independent Events – Events for which the occurrence of one does not affect the probability of occurrence of the other.
c) Mutually Exclusive Events – Events that cannot happen simultaneously; the probability of both occurring together is zero.
d) Complementary Events – Events that are complements of each other. For example, if ‘it rains today’ is an event, then ‘it does not rain today’ is its complementary event. If the result of a coin toss being Heads is an event, then the result being Tails is its complement. Complementary events together make up the whole sample space of the experiment, and they are mutually exclusive in nature.
e) Exhaustive Events – A set of events is said to be exhaustive if every possible result of the experiment belongs to at least one of them, i.e. together they cover the whole sample space. (A die-roll illustration of these terms follows this list.)
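As a compact illustration of these terms (an added example, not from the original text), consider rolling a fair die, with sample space \(\{1,2,3,4,5,6\}\). The events \(A=\{2,4,6\}\) (‘even’) and \(B=\{1,3,5\}\) (‘odd’) are equally likely, mutually exclusive, complementary, and together exhaustive. The events \(A\) and \(C=\{1,2\}\) (‘at most 2’) are independent: knowing that the roll is at most 2 leaves the chance of an even number unchanged at \(\frac{1}{2}\), since exactly one of the two outcomes in \(C\) is even.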
Ex. 1)
If a leap year is chosen at random, what is the probability that it will have 53 Sundays?
Solution:
A leap year has 366 days, which is 52 complete weeks and 2 extra days. The 52 complete weeks already contribute 52 Sundays, so we need the probability that a Sunday also falls among the 2 extra days.
What pairs of consecutive days can these 2 extra days be? Let’s count: Sunday-Monday, Monday-Tuesday, Tuesday-Wednesday, Wednesday-Thursday, Thursday-Friday, Friday-Saturday, Saturday-Sunday – 7 equally likely pairs in all. Out of these, the ones favourable to our event are Saturday-Sunday and Sunday-Monday. Thus the probability is \(\frac{2}{7}\).
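For those who prefer to verify by brute force, the following Python sketch (an addition, not part of the original solution) enumerates the 7 possible weekdays a leap year can start on and counts how many of them produce 53 Sundays.

```python
# For each of the 7 weekdays a leap year can start on, count the Sundays
# among its 366 days and check whether there are 53 of them.
days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

favourable = 0
for start in range(7):  # weekday index of 1 January
    sundays = sum(1 for d in range(366) if days[(start + d) % 7] == "Sun")
    if sundays == 53:
        favourable += 1

print(f"{favourable} / 7")  # prints 2 / 7, matching the answer above
```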
We have already introduced the notion of sample spaces. Let’s consider an event \(A\) in the sample space of some experiment. If we remove \(A\) from the sample space, then whatever remains is the complement of \(A\), denoted by \(A^C\). Thus, \(P(A)+P(A^C)=P(W)=1\), where \(W\) is the sample space of the experiment.
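For instance, in the two-toss experiment above, if \(A=\{TT\}\) then \(A^C=\{HH,HT,TH\}\), and indeed \(P(A)+P(A^C)=\frac{1}{4}+\frac{3}{4}=1\). This also gives a quick way to compute the probability of getting at least one Head: it is \(1-P(TT)=\frac{3}{4}\).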
1.3 Some Results and Notations
\(P(A\bigcup B)\) indicates the probability of occurrence of at least one of the events A and B. Mathematically it’s represented as
\(P(A\bigcup B)=P(A)+P(B)-P(A\bigcap B)\), where \(P(A\bigcap B)\) denotes the probability of occurrence of both A and B.
Similarly \(P(A-B)\) denotes the probability of occurrence of A and not B.
For mutually exclusive events, \(A\bigcap B=\phi\) and hence \(P(A\bigcap B)=0\); for independent events, \(P(A\bigcap B)=P(A)P(B)\).
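As a worked illustration (an added example, not from the original text), draw one card from a standard 52-card deck and let A be ‘the card is a King’ and B be ‘the card is a Heart’. Then \(P(A)=\frac{4}{52}\), \(P(B)=\frac{13}{52}\) and \(P(A\bigcap B)=\frac{1}{52}\) (the King of Hearts), so \(P(A\bigcup B)=\frac{4}{52}+\frac{13}{52}-\frac{1}{52}=\frac{16}{52}=\frac{4}{13}\). Note also that \(P(A)P(B)=\frac{4}{52}\times\frac{13}{52}=\frac{1}{52}=P(A\bigcap B)\), so these two events are independent but not mutually exclusive.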
We can extend this to any number of events, and thus we get the Addition Theorem of Probability:
\(P\left ( \bigcup\limits_{i=1}^n A_i \right )=\sum\limits_{i=1}^nP(A_i)-\sum\limits_{i<j}P(A_i\bigcap A_j)+\sum\limits_{i<j<k}P(A_i\bigcap A_j\bigcap A_k)-\cdots+(-1)^{n-1}P(A_1\bigcap A_2\bigcap\cdots\bigcap A_n)\).
For mutually exclusive events \(A_i\) for all \(i\), we have
\(P\left ( \bigcup\limits_{i=1}^n A_i \right )=\sum\limits_{i=1}^nP(A_i)\).
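The theorem can be checked numerically on a small finite sample space. The Python sketch below (an illustrative addition; the three events are chosen arbitrarily) compares the probability of a union computed directly with the inclusion-exclusion sum for a single roll of a fair die.

```python
from itertools import combinations
from fractions import Fraction

# Uniform sample space: one roll of a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event), len(omega))

# Three arbitrary, illustrative events.
events = [{1, 2, 3}, {2, 4, 6}, {3, 4, 5}]

# Probability of the union, computed directly.
lhs = prob(set().union(*events))

# The same probability via the inclusion-exclusion sum.
rhs = Fraction(0)
for k in range(1, len(events) + 1):
    for combo in combinations(events, k):
        rhs += (-1) ** (k + 1) * prob(set.intersection(*combo))

print(lhs, rhs)  # both print 1: the three events together cover the sample space
```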
2.1 Conditional Probability and Compound Probability
The conditional probability of an event \(E\) deals with calculating the probability of \(E\) given that some other event \(E_1\) has already occurred. We denote the probability of \(E\) happening given that \(E_1\) has already happened by the notation \(P(E|E_1)\).
Mathematically we have
\(P(E|E_1)=\frac{P(E\bigcap E_1)}{P(E_1)}\), provided \(P(E_1)\neq 0\).
This also gives us \(P(E\bigcap E_1)=P(E|E_1)P(E_1)\).
Also, if \(E_1,E_2\) are 2 events, neither with zero probability, then we have
\(P(E_1|E_2)+P(E_1^C|E_2)=1\).
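As a quick worked example (added for illustration, not from the original text), roll two fair dice and let \(E\) be ‘the sum is at least 10’ and \(E_1\) be ‘the first die shows 6’. Then \(P(E_1)=\frac{1}{6}\) and \(P(E\bigcap E_1)=\frac{3}{36}\) (the first die is 6 and the second is 4, 5 or 6), so \(P(E|E_1)=\frac{3/36}{1/6}=\frac{1}{2}\), and consequently \(P(E^C|E_1)=1-\frac{1}{2}=\frac{1}{2}\).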
Events that occur together are called Compound Events, i.e. the term denotes the joint occurrence of multiple events. For mutually exclusive and exhaustive events \(E_i\), with \(1\le i\le n\), and an event \(E\), we have:
\(P(E)=\sum\limits_{i=1}^nP(E\bigcap E_i)=\sum\limits_{i=1}^nP(E_i)P(E|E_i)\)
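For example (an added illustration, not from the original text), suppose one of two boxes is chosen at random – Box 1 with 2 red and 3 blue balls, Box 2 with 4 red and 1 blue ball – and a ball is then drawn from the chosen box. With \(E_1,E_2\) the mutually exclusive and exhaustive events of choosing Box 1 or Box 2, and \(E\) the event of drawing a red ball, the formula gives \(P(E)=P(E_1)P(E|E_1)+P(E_2)P(E|E_2)=\frac{1}{2}\times\frac{2}{5}+\frac{1}{2}\times\frac{4}{5}=\frac{3}{5}\).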