A University of Warwick study published in Nature Biomedical Engineering found that AI tools for cancer pathology are prone to “shortcut learning,” in which models rely on spurious correlations in the training data rather than on underlying biological signals. The investigators showed that without careful dataset curation and validation, models can perform well on internal benchmarks yet fail to generalize clinically, and they demonstrated cases in which confounders, such as slide artifacts or site-specific staining, drove model predictions. The paper calls for stronger standards in dataset design, external validation and reporting to ensure that AI tools identify true pathophysiologic features. The findings urge caution over commercial and clinical deployment of pathology AI; the authors argue that regulators, journals and developers should demand provenance and mechanistic explainability before clinical adoption.
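
To make the failure mode concrete, here is a minimal, hypothetical sketch (not the paper’s code; the cohort sizes, the site_marker feature and the make_cohort helper are invented for illustration) of how a model can latch onto a site-specific artifact that tracks the labels in the training cohort but carries no information at an external site:

```python
# Hypothetical illustration of "shortcut learning" on synthetic data.
# A "site_marker" feature (standing in for, e.g., a staining artifact)
# correlates with the label at the training site but not at an external
# site, so a model that latches onto it looks strong internally and
# fails externally. All names and numbers here are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, shortcut_corr):
    """Generate labels, a weak genuine signal, and a site marker whose
    agreement with the label is controlled by shortcut_corr."""
    y = rng.integers(0, 2, n)
    biology = y + rng.normal(0, 2.0, n)           # weak true biological signal
    flip = rng.random(n) > shortcut_corr          # sometimes break the correlation
    site_marker = np.where(flip, 1 - y, y).astype(float)
    site_marker += rng.normal(0, 0.1, n)          # small measurement noise
    return np.column_stack([biology, site_marker]), y

# Internal cohort: the artifact almost perfectly tracks the label (confounded).
X_train, y_train = make_cohort(2000, shortcut_corr=0.95)
# External cohort: the artifact is uninformative about the label.
X_ext, y_ext = make_cohort(2000, shortcut_corr=0.5)

model = LogisticRegression().fit(X_train, y_train)
print("internal accuracy:", model.score(X_train, y_train))  # looks strong
print("external accuracy:", model.score(X_ext, y_ext))      # collapses
```

In this sketch the internal accuracy looks strong because the shortcut feature nearly determines the label, while external accuracy collapses toward what the weak biological signal alone supports; that gap is exactly what the external validation the paper calls for is meant to expose.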