Meeting room: https://l.infn.it/mlinfn-room
Minutes: https://l.infn.it/mlinfn-minutes
A key choice in Deep Learning applications is the selection of an appropriate loss function to minimize and of the evaluation metrics used to assess model performance. In general, there is no one-size-fits-all option for these quantities, which makes it paramount to select the candidates that best align with the scope of the application and cope with its specific challenges.
In this work, we investigate the impact of loss functions and evaluation metrics on model performance and generalization in the context of cell recognition.
In particular, we address common challenges of segmentation problems, such as class imbalance, overcrowding and noisy labels, by comparing the impact of several loss functions, namely the BCE, Dice, Focal, and Focal-Tversky losses. We also include two combined versions obtained as weighted sums of the above losses, aiming to combine their individual strengths.
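As a rough illustration of the compared losses, the sketch below implements them for binary cell masks in PyTorch. The hyperparameters (alpha, beta, gamma) and the weights of the combined variant are illustrative assumptions, not the values used in the study; the actual implementation is in the GitHub repository linked under Resources.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss computed on sigmoid probabilities."""
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy pixels to counter class imbalance."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                      # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

def focal_tversky_loss(logits, targets, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Focal-Tversky loss: Tversky index with an extra focusing exponent."""
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum(dim=(1, 2, 3))
    fp = (probs * (1 - targets)).sum(dim=(1, 2, 3))
    fn = ((1 - probs) * targets).sum(dim=(1, 2, 3))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1.0 - tversky) ** gamma).mean()

def combined_loss(logits, targets, w_bce=0.5, w_dice=0.5):
    """Weighted sum of two losses (hypothetical weights, for illustration only)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return w_bce * bce + w_dice * dice_loss(logits, targets)
```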
The assessment is performed across multiple dimensions, scrutinizing segmentation, detection and counting performance.
Finally, we explore how data characteristics such as object size, color irregularities and textures may impact model generalization to new data.
Collectively, these insights provide an understanding of the various facets resulting from the application of Deep Learning for automatic cell segmentation, shedding light on best practices, evaluation strategies, and model generalization.
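To make the three evaluation dimensions concrete, a minimal sketch of possible segmentation, detection and counting metrics is given below. The centroid-matching distance threshold and the greedy matching scheme are assumptions for illustration, not necessarily the protocol used in the study.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Segmentation quality: pixel-wise Dice score between binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def detection_f1(pred_mask, true_mask, max_dist=20):
    """Detection quality: greedily match predicted and true object centroids
    within `max_dist` pixels, then compute F1."""
    pred_lab, n_pred = ndimage.label(pred_mask)
    true_lab, n_true = ndimage.label(true_mask)
    pred_c = np.array(ndimage.center_of_mass(pred_mask, pred_lab, range(1, n_pred + 1))).reshape(-1, 2)
    true_c = np.array(ndimage.center_of_mass(true_mask, true_lab, range(1, n_true + 1))).reshape(-1, 2)
    matched, tp = set(), 0
    for p in pred_c:
        d = np.linalg.norm(true_c - p, axis=1) if n_true else np.array([])
        j = int(d.argmin()) if d.size else -1
        if j >= 0 and d[j] <= max_dist and j not in matched:
            matched.add(j)
            tp += 1
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_true if n_true else 0.0
    return 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0

def counting_error(pred_mask, true_mask):
    """Counting quality: absolute difference in number of connected components."""
    return abs(ndimage.label(pred_mask)[1] - ndimage.label(true_mask)[1])
```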
Resources
Talk: https://physicsmeetsai.github.io/beyond-vision/assets/pdf/slides/3_slides.pdf
Data: https://arxiv.org/pdf/2307.14243.pdf
GitHub: https://github.com/clissa/fluocells-BVPAI