Today’s day one of sharing what I’m learning! Probability is something I haven’t really thought about since grade school. In this note, we’re going to learn a bit about applied probability in plain English. Lately, I’ve discovered that randomness can be incredibly useful in designing efficient algorithms. When I think about algorithms, I think of a series of steps needed to achieve a goal. There’s no inherent randomness in that. It turns out, though, that randomness can help make things more efficient. That’s where probability comes in: it helps us figure out how likely a certain random scenario is.
What I’ve learned is that there are a few fundamental concepts you need to understand before you can look at an algorithm with randomness in it and figure out its efficiency.
First there’s the sample space, which includes all the outcomes that could happen. The classic example is a dice roll. If we roll a single die, the outcome could be any number from 1 to 6. Next are events, which are outcomes (or sets of outcomes) from the sample space. One event that could happen: if we roll the die, we MAY get a number less than 3. What’s the probability of that event? Well, since there are 6 possible outcomes and 2 of them (rolling a 1 or a 2) are included in our event, the probability of it is 2/6, or 1/3! After that, I learned about random variables. A random variable is a value that appears randomly and relates to the problem at hand. Using our dice example, whatever value pops up when we roll is a random variable.
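To make these three concepts concrete, here’s a tiny Python sketch of them (the variable names are just mine for illustration, not standard terminology from any library):

```python
import random
from fractions import Fraction

# Sample space of one die roll: every outcome that could happen.
sample_space = [1, 2, 3, 4, 5, 6]

# Event: "we roll a number less than 3" is the subset {1, 2}.
event = [outcome for outcome in sample_space if outcome < 3]

# For a fair die, probability = favorable outcomes / total outcomes.
probability = Fraction(len(event), len(sample_space))
print(probability)  # 1/3

# A random variable: the value that pops up on one simulated roll.
roll = random.choice(sample_space)
print(roll)
```

Note that `Fraction(2, 6)` automatically reduces to 1/3, matching the hand calculation above.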
Our second-to-last concept to understand is expectation. All that means is: what do we expect our random variable to be, what is its expected value? The expectation is a weighted average. What’s the average value of our die? To find that out, we multiply each value by the probability that it’ll occur and add all of those products together. So to figure out the expected value of our die, we need the values and the probability of each value. With a fair die, the probability of each value is ⅙. That means we’d do the following calculation: 1(⅙) + 2(⅙) + 3(⅙) + 4(⅙) + 5(⅙) + 6(⅙) = 3.5. That means the value of our die is going to average out at 3.5!
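That weighted-average calculation is short enough to write out in a few lines of Python, assuming a fair die where every face has probability ⅙:

```python
from fractions import Fraction

# Each face of a fair die comes up with probability 1/6.
values = [1, 2, 3, 4, 5, 6]
prob = Fraction(1, 6)

# Expectation is the weighted average: sum of value * probability.
expected_value = sum(value * prob for value in values)
print(float(expected_value))  # 3.5
```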
These 4 initial concepts lead us to our last and most important one: the linearity of expectation. It sounds like a mouthful and this is definitely the one that took me the longest to fully grasp (and to be honest, I’m still trying to understand its implications). This is the one we’ve been building up to. Let’s imagine we want to find the expected value if I roll two dice instead of one. We know that the expected value of one die roll is 3.5, so due to the linearity of expectation, I know that the expected value of the roll of two dice is 7! Why? I can add the expected value of one die roll to the expected value of another die roll (i.e. 3.5 + 3.5) and I get 7! If I didn’t know that, and I wanted to find the expected value of two dice rolls directly, I’d have to go through all 36 equally likely pairs of rolls: (1+1)(1/36) + (1+2)(1/36) + … etc. for every pair. Honestly it’s so much work I don’t even want to show it all! The linearity of expectation makes things so much easier. In essence, the linearity of expectation states that the expected value of a random variable, let’s call it Z, that is the sum of other random variables will be the same as the sum of the expected values of each individual random variable that makes up Z.
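We can actually check this with a few lines of Python: one version grinds through all 36 pairs the long way, and the other just adds 3.5 + 3.5 using linearity of expectation. (This is just my own sketch to verify the two answers agree, not a formal proof.)

```python
from fractions import Fraction
from itertools import product

values = [1, 2, 3, 4, 5, 6]

# Expected value of a single fair die: 3.5.
single = sum(v * Fraction(1, 6) for v in values)

# The long way: average the sum over all 36 equally likely pairs.
brute_force = sum((a + b) * Fraction(1, 36)
                  for a, b in product(values, repeat=2))

# The easy way, via linearity of expectation: E[X + Y] = E[X] + E[Y].
linear = single + single

print(float(brute_force), float(linear))  # 7.0 7.0
```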
There we go! We did it! Those are the main concepts I recently learned about probability. These facts help us get one step closer to being able to analyze algorithms that involve probability and to better understand the world around us. Now, I’m still learning too, so if you see anything that needs to be corrected, my contact info is at the bottom of every page. I hope you’ve enjoyed this taste of probability and are excited for what’s coming up!