The path to AI: #5 Performance evaluation

Now that we have covered a fair amount of the components and instruments involved in AI systems, it is time to talk a little about performance evaluation and its importance. Performance evaluation is an essential step in any methodological process: after building a machine learning model, testing its accuracy is very important, since it shows whether the results are promising and how accurate the model will be. To assess a machine learning model's performance, we have to rely on well-defined parameters and metrics.


To compute the specific evaluation metrics, we have to use four significant parameters (see the sketch after this list):

·True-positive (tp): The cases wherein we predicted YES, and the real output was likewise YES.

·False-positive (fp): The cases wherein we predicted YES, but the real output was NO.

·True-negative (tn): The cases wherein we predicted NO, and the real output was likewise NO.

·False-negative (fn): The cases wherein we predicted NO, but the real output was YES.
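As a minimal sketch, here is how these four counts could be computed for a binary classifier; the function name count_outcomes and the label encoding (1 for YES, 0 for NO) are illustrative assumptions, not part of the original post.

```python
# Count tp, fp, tn, fn for a binary classifier.
# Assumption: labels are encoded as 1 (YES) and 0 (NO).
def count_outcomes(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)
    return tp, fp, tn, fn

# Toy example: six predictions compared against the real outputs.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(count_outcomes(y_true, y_pred))  # (2, 1, 2, 1)
```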

Now that we have a clear idea of what performance evaluation is, let us cover some of the many machine learning evaluation metrics, for example, the following:

Precision: Precision, or the positive predictive value, is the number of correct positive results divided by the total number of positive results predicted by the classifier.
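As a quick sketch, precision follows directly from the counts above; the values plugged in below come from the toy example in the earlier snippet.

```python
# Precision = tp / (tp + fp): of everything we predicted YES, how much really was YES.
def precision(tp, fp):
    return tp / (tp + fp)

print(precision(tp=2, fp=1))  # 0.666...: 2 of the 3 YES predictions were correct
```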

Recall: Recall, or the true positive rate, is the number of correct positive results divided by the number of all relevant samples (all samples that ought to have been classified as positive).
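The same goes for recall, again reusing the counts from the toy example above.

```python
# Recall = tp / (tp + fn): of everything that really was YES, how much we found.
def recall(tp, fn):
    return tp / (tp + fn)

print(recall(tp=2, fn=1))  # 0.666...: 2 of the 3 actual YES cases were caught
```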

F-Score: The F1 score measures a test's accuracy as the harmonic mean of precision and recall, and it takes values in [0, 1]. It reveals both how exact the classifier is (how many instances it classifies accurately) and how robust it is (whether it misses a significant number of cases). High precision with low recall means the classifier is very accurate on the cases it flags, but it misses a large number of instances that are hard to classify. The greater the F1 score, the better the performance of our model. Mathematically, we can put it like this:

F1 = 2 × (precision × recall) / (precision + recall)
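A minimal sketch of the same formula in code, fed with the precision and recall values from the toy example above:

```python
# F1 = harmonic mean of precision and recall, always in [0, 1].
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(f1_score(2 / 3, 2 / 3))  # 0.666...: equal precision and recall give that same F1
```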
Accuracy: Accuracy is the ratio of the total number of correct predictions to the total number of input samples. It works well only if there is a roughly equal number of samples belonging to each class.
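Accuracy, too, can be written straight from the four counts; the numbers below are again the toy-example values, used only for illustration.

```python
# Accuracy = (tp + tn) / all samples: the share of predictions that were correct.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=2, tn=2, fp=1, fn=1))  # 0.666...: 4 of 6 predictions were correct
```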

Confusion matrix: The confusion matrix, as the name suggests, gives us a matrix as output that describes the complete performance of the model. It is a graphical representation, which makes the results easy to understand, since visual information is always the better choice for manual analysis.
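As a final sketch, a two-class confusion matrix can be assembled from the same four counts; real projects typically use a library helper such as scikit-learn's sklearn.metrics.confusion_matrix, but the underlying structure is just this 2×2 table.

```python
# Rows are the actual classes, columns are the predicted classes:
#                 predicted NO   predicted YES
#   actual NO          tn             fp
#   actual YES         fn             tp
def confusion_matrix(tp, fp, tn, fn):
    return [[tn, fp],
            [fn, tp]]

for row in confusion_matrix(tp=2, fp=1, tn=2, fn=1):
    print(row)
# [2, 1]
# [1, 2]
```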