As described above, confusion matrix 900 indicates that the predictive model made a total of 72,000 predictions (e.g., 72,000 driver images were input into the predictive model). Of those 72,000 cases, the predictive model predicted "Yes" 63,000 times (column 904) and "No" 9,000 times (column 902). However, the actual values for the images differ: 62,000 images (row 908) should have been predicted as "Yes" (i.e., a driver behavior should have been found in the image), and 10,000 images (row 906) should have been predicted as "No" (i.e., a driver behavior should not have been found in the image).

The confusion matrix 900 indicates how accurate the model was in making predictions. For example, true positives (912) represent cases in which the model predicted "Yes" (driver behavior predicted) and the actual image does have driver behavior. True negatives (910) represent cases in which the model predicted "No" and the actual image does not have driver behavior. False positives (916) represent cases in which the model predicted "Yes," but the actual image does not have driver behavior (also known as a "Type I error"). Finally, false negatives (914) represent cases in which the model predicted "No," but the actual image does have driver behavior (also known as a "Type II error").
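To make the four categories concrete, the following minimal sketch tallies the cells of a binary confusion matrix from paired prediction/ground-truth labels. The function name `tally_confusion` and the per-cell split of the counts are illustrative assumptions; only the marginal totals (63,000/9,000 predicted, 62,000/10,000 actual) come from the description of matrix 900 above.

```python
from collections import Counter

def tally_confusion(predicted, actual):
    """Count confusion-matrix cells for binary "Yes"/"No" labels.

    Returns a Counter keyed by (predicted, actual) pairs:
      ("Yes", "Yes") -> true positives  (element 912)
      ("No",  "No")  -> true negatives  (element 910)
      ("Yes", "No")  -> false positives (element 916, Type I error)
      ("No",  "Yes") -> false negatives (element 914, Type II error)
    """
    return Counter(zip(predicted, actual))

# Hypothetical labels consistent with the totals in matrix 900:
# 63,000 "Yes" and 9,000 "No" predictions; 62,000 actual "Yes" and
# 10,000 actual "No". The per-cell split below is assumed for illustration.
predicted = ["Yes"] * 61_500 + ["Yes"] * 1_500 + ["No"] * 500 + ["No"] * 8_500
actual    = ["Yes"] * 61_500 + ["No"]  * 1_500 + ["Yes"] * 500 + ["No"] * 8_500

cells = tally_confusion(predicted, actual)
tp = cells[("Yes", "Yes")]  # true positives (912)
tn = cells[("No", "No")]    # true negatives (910)
fp = cells[("Yes", "No")]   # false positives (916, Type I errors)
fn = cells[("No", "Yes")]   # false negatives (914, Type II errors)

# Overall accuracy is the fraction of the 72,000 predictions that fall
# on the matrix diagonal: (TP + TN) / total.
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"TP={tp} TN={tn} FP={fp} FN={fn} accuracy={accuracy:.3f}")
```

Under this assumed split, the diagonal cells (912 and 910) sum to 70,000 of the 72,000 predictions, giving an accuracy of roughly 0.972; any other split consistent with the row and column totals would be tallied the same way.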