This is the ConvNeXt-based multiple-view model from the paper: https://arxiv.org/abs/2503.19945.

To my knowledge, at the time of the paper this model achieved the best reported AUC on the VinDr-Mammo dataset.

The model achieves an AUC of 0.8511 when both views (CC and MLO) of each breast side of an exam are considered together.
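
For orientation, below is a minimal sketch of what a two-view (CC + MLO) classifier of this kind can look like in PyTorch. It is not the exact architecture from the paper or the repository linked below; the shared ConvNeXt-Tiny backbone, concatenation fusion, and head dimensions are assumptions chosen only for illustration.

```python
# Minimal two-view (CC + MLO) classifier sketch.
# NOTE: illustrative only -- the backbone choice (ConvNeXt-Tiny), shared weights,
# concatenation fusion, and head sizes are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny


class TwoViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 1):
        super().__init__()
        # Shared ConvNeXt feature extractor applied to both views.
        self.backbone = convnext_tiny(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # ConvNeXt-Tiny outputs 768-dim features; features of the two views are concatenated.
        self.head = nn.Linear(2 * 768, num_classes)

    def forward(self, cc: torch.Tensor, mlo: torch.Tensor) -> torch.Tensor:
        f_cc = self.pool(self.backbone(cc)).flatten(1)
        f_mlo = self.pool(self.backbone(mlo)).flatten(1)
        return self.head(torch.cat([f_cc, f_mlo], dim=1))


# Example: one breast side, two views of size 3x224x224.
model = TwoViewClassifier()
logit = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(logit.shape)  # torch.Size([1, 1])
```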

To evaluate classifier performance, we grouped the Bi-RADS categories into two broader classes: “Normal” for views rated Bi-RADS 1 or 2, and “Abnormal” for views rated Bi-RADS 3, 4, or 5.
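
As a sketch of this grouping, the snippet below maps Bi-RADS categories to the binary labels and computes AUC with scikit-learn; the Bi-RADS values and model scores are hypothetical placeholders, not results from the model.

```python
# Binary grouping used for evaluation: Bi-RADS 1-2 -> Normal (0), Bi-RADS 3-5 -> Abnormal (1).
# The example Bi-RADS categories and scores below are made up for illustration.
from sklearn.metrics import roc_auc_score

def birads_to_binary(birads: int) -> int:
    return 0 if birads in (1, 2) else 1

birads_per_view = [1, 2, 3, 4, 5, 2, 1, 4]                    # hypothetical ground truth
model_scores    = [0.1, 0.2, 0.7, 0.8, 0.9, 0.3, 0.2, 0.6]    # hypothetical abnormality scores

labels = [birads_to_binary(b) for b in birads_per_view]
print("AUC:", roc_auc_score(labels, model_scores))
```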

Please find code and inference instructions here: https://github.com/dpetrini/multiple-view
