ML_INFN Weekly Meeting

Europe/Rome
    • 16:00 - 16:30
      Discussion of the basic hackathon 30m
      Speaker: Francesca Lizzi (Istituto Nazionale di Fisica Nucleare)
    • 16:30 - 17:30
      Ante-hoc explainability methods: the ProtoPNet architecture and its application to DBT images 1h

      Deep learning models have become state-of-the-art in many areas, from computer vision to agricultural research. However, concerns have been raised about the transparency of their decisions, especially in sensitive fields such as medicine. In this regard, Explainable Artificial Intelligence has been gaining popularity in recent years. Post-hoc explainability methods try to explain an existing black-box model. Ante-hoc explainability methods, on the other hand, avoid black-box models in the first place, building instead inherently transparent models that provide both a prediction and the corresponding explanation. The ProtoPNet model, an innovative ante-hoc explainability method that breaks an image down into prototypes and uses the evidence gathered from those prototypes to classify it, represents an appealing approach. In our work, we explore the architecture of ProtoPNet and the applicability of such a model to medical images. Specifically, we first study the performance and quality of the explanations when the model is applied to mass classification in mammogram images, and we then investigate the development of an inherently explainable unified approach for mass detection and classification in Digital Breast Tomosynthesis (DBT), a novel breast imaging modality providing substantial advantages over classical mammography. (A minimal, illustrative sketch of the prototype-matching idea is given after this entry.)

      Speaker: Andrea Berti (Istituto Nazionale di Fisica Nucleare)
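
Below is a minimal, self-contained sketch of the prototype-matching idea summarised in the abstract, written in PyTorch. It is not the speakers' implementation: the toy backbone, prototype shapes, and hyperparameters are illustrative assumptions that only mirror the general ProtoPNet recipe (compare convolutional feature patches to learnable prototypes, keep the best match per prototype, and feed that evidence to a linear classifier).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Toy prototype-based classifier in the spirit of ProtoPNet:
    an image is scored by how similar its convolutional feature
    patches are to a set of learnable prototype vectors
    (simplified relative to the real model)."""

    def __init__(self, n_classes=2, n_proto_per_class=3, feat_dim=64):
        super().__init__()
        self.n_proto = n_classes * n_proto_per_class
        # Tiny CNN backbone standing in for a pretrained feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Prototypes live in feature space: (n_proto, feat_dim, 1, 1).
        self.prototypes = nn.Parameter(torch.rand(self.n_proto, feat_dim, 1, 1))
        # Linear layer turning prototype evidence into class logits.
        self.classifier = nn.Linear(self.n_proto, n_classes, bias=False)

    def forward(self, x):
        feats = self.backbone(x)                      # (B, feat_dim, H, W)
        # Squared L2 distance between every feature patch and every prototype,
        # expanded as ||f||^2 - 2 f.p + ||p||^2, using a convolution for f.p.
        f_sq = F.conv2d(feats ** 2, torch.ones_like(self.prototypes))
        fp = F.conv2d(feats, self.prototypes)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        dist = F.relu(f_sq - 2 * fp + p_sq)           # (B, n_proto, H, W)
        # Similarity: small distance -> large activation (log-style scoring).
        sim = torch.log((dist + 1) / (dist + 1e-4))
        # Keep the best-matching patch per prototype ("where is this prototype?").
        evidence = F.max_pool2d(sim, kernel_size=sim.shape[2:]).flatten(1)
        return self.classifier(evidence)              # (B, n_classes) logits

# Usage on a dummy batch of single-channel images.
model = PrototypeClassifier()
logits = model(torch.rand(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])

In the actual ProtoPNet, the backbone is a pretrained CNN, prototypes are class-specific and periodically projected onto real training patches (which is what makes the explanations "this looks like that"), and the last layer is trained with sparsity constraints; none of that is reproduced in this sketch.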