In pattern recognition, information retrieval, and binary classification, precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of all relevant instances that were actually retrieved. Both precision and recall are therefore based on an understanding and measure of relevance.
Let me put it this way: imagine you have a system that returns X results for your search query. Precision measures how many of those X results are actually relevant to your query. Recall measures how many of all the relevant results for that query you actually returned.
The terms true condition and predicted condition, each of which can have a positive or negative outcome, are used when discussing confusion matrices. This means you need to understand the difference between Type I and Type II errors.
- Type I Error (or False Positive) is a result that indicates a given condition is present when it really is not. For example, a drug test may come back positive even though the person tested has never taken the drug. A reading goes in the false positive box when the model predicts a yes and the true data shows a no.
- Type II Error (or False Negative) is a result that indicates a given condition is not present when it really is. For example, a drug test may come back negative while the person is indeed taking drugs. Did the model predict a no while the true data reads yes? Sad, but that reading goes into the false negative box.
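The two error types above can be counted directly from labels. Here is a minimal sketch, assuming hypothetical true labels and model predictions where 1 means positive and 0 means negative:

```python
# Hypothetical true labels and model predictions (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

# Type I error: model says yes, truth says no.
false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

# Type II error: model says no, truth says yes.
false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(false_positives, false_negatives)  # 2 1
```

Each pair of (true, predicted) values lands in exactly one box of the confusion matrix; the two sums above pick out the two error boxes.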
Here is an example in Python to illustrate the idea behind precision and recall.
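This is a minimal sketch using hypothetical labels (1 = relevant, 0 = not relevant); it computes both metrics from the confusion matrix counts:

```python
# Hypothetical true labels and model predictions (1 = relevant, 0 = not relevant).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives (Type I)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives (Type II)

precision = tp / (tp + fp)  # fraction of retrieved results that are relevant
recall = tp / (tp + fn)     # fraction of relevant results that were retrieved

print(f"Precision: {precision:.2f}")  # 3 / (3 + 2) = 0.60
print(f"Recall: {recall:.2f}")        # 3 / (3 + 1) = 0.75
```

If you already use scikit-learn, `sklearn.metrics.precision_score` and `sklearn.metrics.recall_score` compute the same quantities from the two label lists.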
Thanks for reading. If you loved this article, feel free to hit that subscribe button so we can stay in touch.