How accurate is your AI system?

Evaluating AI applicability.

AI evaluation method
The new AI evaluation method looks at the input data itself to determine whether the reported 'accuracy' of an AI system can be trusted.

AI use is continuously expanding. One such effort comes from J. B. Brown of the Graduate School of Medicine, who has developed a method for evaluating AI systems that make binary predictions: yes/positive/true or no/negative/false.

Brown deconstructs how AI is used and examines the nature of the statistics reported to describe an AI program's ability. His method generates a probability of the performance level given the evaluation data, answering questions such as: what is the probability of achieving accuracy greater than 90%?
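To make the question concrete, here is a minimal sketch of how one might estimate the probability of observing accuracy above a threshold. This is not Brown's actual method; it assumes a simple binomial model in which each prediction is independently correct with probability `p`, and the function name and parameters are illustrative.

```python
from math import comb

def prob_accuracy_above(p, n, threshold):
    """P(observed accuracy > threshold) over n evaluation samples,
    assuming each prediction is independently correct with probability p."""
    # Smallest number of correct predictions strictly exceeding the threshold.
    k_min = int(n * threshold) + 1
    # Binomial upper tail: sum P(exactly k correct) for k = k_min .. n.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# E.g. a model with true per-prediction accuracy 0.95, evaluated on 100 samples:
# how likely is it to show > 90% measured accuracy?
print(prob_accuracy_above(0.95, 100, 0.9))
```

Even this toy model shows that the answer depends on the size and makeup of the evaluation data, not just on the model itself.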

In typical AI development, an evaluation can only be trusted if the evaluation data contain an equal number of positive and negative outcomes. If the data are skewed toward either value, the current conventions of evaluation will overstate the system's ability.
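A small example makes the skew problem concrete. On an imbalanced evaluation set, a model that learns nothing at all can still report an impressive accuracy simply by always predicting the majority class:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Skewed evaluation set: 95 negatives, only 5 positives.
y_true = [0] * 95 + [1] * 5
# A useless model that always predicts the majority class (negative).
y_pred = [0] * 100

print(accuracy(y_true, y_pred))  # 0.95 — looks excellent, yet finds no positives
```

The 95% figure says more about the composition of the data than about the model.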

The new method tackles this problem by evaluating performance on the basis of the input data itself.

Brown said, “The novelty of this technique is that it doesn’t depend on any one type of AI technology, such as deep learning. It can help develop new evaluation metrics by looking at how a metric interplays with the balance in predicted data. We can then tell if the resulting metrics could be biased.”

“This analysis will not only raise awareness of how we think about AI in the future, but also contribute to the development of more robust AI platforms.”

In addition to the accuracy metric, Brown tested six other metrics in both theoretical and applied scenarios, finding that no single metric was universally superior. He says the key to building useful AI platforms is to take a multi-metric view of evaluation.
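The multi-metric view can be sketched by computing several standard classification metrics from the same confusion-matrix counts. The function below is an illustrative helper, not Brown's program; it uses widely known formulas (precision, recall, F1, balanced accuracy, Matthews correlation coefficient):

```python
from math import sqrt

def metric_panel(tp, fp, tn, fn):
    """Compute several classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    balanced = 0.5 * (recall + specificity)
    # Matthews correlation coefficient: 0 when the denominator vanishes.
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "precision": precision, "recall": recall,
            "F1": f1, "balanced_accuracy": balanced, "MCC": mcc}

# The majority-class predictor on a 95:5 split: tp=0, fp=0, tn=95, fn=5.
print(metric_panel(0, 0, 95, 5))
```

Here accuracy is 0.95 while balanced accuracy is 0.5 and MCC is 0.0, so no single metric tells the whole story: metrics disagree precisely when the data are imbalanced.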

Brown’s program is freely available to the general public, researchers, and developers. His paper was published in Molecular Informatics.