Visualizing a Trained Neural Network

John Melonakos | ArrayFire, Case Studies

Researchers from the University of Bordeaux in France credit ArrayFire in a paper published at ICPR 2020’s workshop on Explainable Deep Learning for AI. The paper, titled “Samples Classification Analysis Across DNN Layers with Fractal Curves,” presents a tool for visualizing where a deep neural network starts to be able to discriminate between classes.


Summary

Deep neural networks (DNNs) have become the prominent choice of machine learning model. However, they suffer from a black-box effect that complicates the interpretation of their inner workings and thus the understanding of their successes and failures. Information visualization is one way, among others, to aid their interpretability and the formation of hypotheses. This paper presents a novel way to visualize a trained DNN that depicts, at the same time, its architecture and how it treats the classes of a test dataset at the layer level. In this way, it is possible to visually detect where the DNN starts to be able to discriminate the classes, or where its separation ability decreases (and thus to detect an oversized network). The researchers implemented and validated the approach on several well-known datasets and networks. The results show the approach is promising and deserves further study.
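
Since this kind of visualization starts from the activations a test set produces at every layer, the raw material is straightforward to gather. Below is a minimal, hypothetical sketch (not the authors’ code) of collecting per-layer activations with PyTorch forward hooks; the toy model and the leaf-layer selection are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# A small stand-in model; in practice this would be the trained network
# under study (the actual architecture here is an assumption).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
).eval()

activations = {}   # layer name -> flattened activations of the test batch

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().flatten(1).cpu()
    return hook

# Register a hook on every leaf layer so each one reports its output.
for name, module in model.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))

with torch.no_grad():
    test_batch = torch.randn(8, 3, 32, 32)   # stand-in for real test data
    model(test_batch)

for name, feats in activations.items():
    print(f"layer {name}: features {tuple(feats.shape)}")
```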

The Algorithm

Figure 2 from the paper shows the schematic flow of the algorithm.
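
For orientation only, here is a rough sketch of the general shape of such a pipeline: project each layer’s activations to two dimensions, order the samples along a Hilbert curve through that plane, and render the layer as a strip of class colors. The projection choice (PCA), the grid size, and the toy data are assumptions for illustration; the paper’s actual fractal-curve construction should be taken from the paper itself.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def hilbert_index(n, x, y):
    """Map grid cell (x, y) on an n x n grid (n a power of two) to its
    position along the Hilbert curve (standard iterative conversion)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def layer_strip(features, labels, grid=64):
    """Order the samples of one layer along a Hilbert curve over their
    2-D PCA projection and return the class labels in that order."""
    xy = PCA(n_components=2).fit_transform(features)
    xy = (xy - xy.min(0)) / (np.ptp(xy, axis=0) + 1e-9)   # scale to [0, 1]
    cells = np.clip((xy * (grid - 1)).astype(int), 0, grid - 1)
    order = np.argsort([hilbert_index(grid, x, y) for x, y in cells])
    return labels[order]

# Toy demo: two synthetic "layers", the second more separable than the first.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 100)
layer_a = rng.normal(size=(300, 128))                 # unstructured
layer_b = layer_a + labels[:, None] * 2.0             # class-shifted

fig, axes = plt.subplots(2, 1, figsize=(8, 2))
for ax, feats, title in [(axes[0], layer_a, "early layer"),
                         (axes[1], layer_b, "late layer")]:
    ax.imshow(layer_strip(feats, labels)[None, :], aspect="auto", cmap="tab10")
    ax.set_yticks([])
    ax.set_title(title, fontsize=8)
plt.tight_layout()
plt.show()
```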

The Results

Figure 3 from the paper shows example results on a few datasets. The different colors represent the different classes into which the samples are progressively classified.
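
To complement the visual reading, one simple, hypothetical way to quantify where separation emerges is to compute a silhouette score over each layer’s activations. This metric is our own illustrative choice, not one taken from the paper.

```python
import numpy as np
from sklearn.metrics import silhouette_score

def per_layer_separability(layer_features, labels):
    """Return {layer name: silhouette score} for a dict of activations."""
    return {name: silhouette_score(feats, labels)
            for name, feats in layer_features.items()}

# Toy demo: separation should rise in the later, class-shifted layer.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 100)
layers = {
    "early": rng.normal(size=(300, 64)),
    "late": rng.normal(size=(300, 64)) + labels[:, None] * 3.0,
}
for name, score in per_layer_separability(layers, labels).items():
    print(f"{name:>5}: silhouette = {score:+.2f}")
```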

Conclusion

Deep learning classifiers are progressively replacing handcrafted, well-understood standard classifiers in various fields. This performance gain is counterbalanced by the difficulty of understanding how and why they perform well. Information visualization is one answer to this lack of interpretability. This paper presents a pipeline that consumes a trained network and a dataset and produces an interactive representation depicting the network’s architecture and the behavior of the test samples at each layer. Such a system makes it possible to visually analyze a dataset’s classification quality across layers. It could be used to visually detect patterns in the data and to propose hypotheses about the network’s performance.


Thanks to these researchers for sharing their great work with us!
