Meeting room: https://l.infn.it/mlinfn-room
Minutes: https://l.infn.it/mlinfn-minutes
Deep learning models have become state-of-the-art in many areas, from computer vision to agricultural research. However, concerns have been raised regarding the transparency of their decisions, especially in sensitive fields such as medicine. In this regard, Explainable Artificial Intelligence has been gaining popularity in recent years. Post-hoc explainability methods try to explain an already-existing black-box model. Ante-hoc explainability methods, on the other hand, avoid using black-box models in the first place, instead building inherently transparent models that provide both a prediction and the corresponding explanation. The ProtoPNet model, an innovative ante-hoc explainability method that breaks an image down into prototypes and uses the evidence gathered from those prototypes to classify it, represents an appealing approach. In our work, we explore the architecture of ProtoPNet and the applicability of such a model to medical images. Specifically, we first study the performance and quality of explanations when the model is applied to mass classification from mammogram images, and second, we investigate the development of an inherently explainable unified approach for mass detection and classification in Digital Breast Tomosynthesis, a novel breast imaging modality providing substantial advantages over classical mammogram images.
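The prototype-based classification idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the speakers' implementation): all dimensions, the similarity function, and the random "learned" parameters are assumptions, loosely following the ProtoPNet recipe of comparing convolutional patch features to prototype vectors, max-pooling the similarity of each prototype over the image, and feeding those similarity scores to a linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 7x7 grid of 128-d patch features, 10 prototypes, 2 classes
H, W, D = 7, 7, 128
P, C = 10, 2

features = rng.standard_normal((H * W, D))   # patch embeddings of one image
prototypes = rng.standard_normal((P, D))     # "learned" prototype vectors (random here)
class_weights = rng.standard_normal((C, P))  # linear layer over prototype evidence

# Squared L2 distance between every patch and every prototype -> shape (H*W, P)
dists = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)

# ProtoPNet-style similarity log((d + 1) / (d + eps)): large when a patch is close
eps = 1e-4
sims = np.log((dists + 1.0) / (dists + eps))

# Max-pool over patches: how strongly each prototype appears anywhere in the image
evidence = sims.max(axis=0)                  # shape (P,)

# Class scores are a weighted sum of prototype evidence, so each prediction
# can be traced back to the prototypes (and patches) that supported it
logits = class_weights @ evidence            # shape (C,)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Because the class score is a linear combination of per-prototype evidence, the model can point at which prototype (and which image patch activated it) contributed to the decision, which is the source of its ante-hoc explainability.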