Researchers evaluated foundation models as feature extractors for weakly supervised computational pathology and reported scalable performance on whole-slide image (WSI) tasks. Published in Nature Biomedical Engineering, the study assessed transfer learning from large pre-trained models to pathology endpoints where pixel-level labels are scarce. The team demonstrated that foundation-model features combined with weak supervision can match or outperform bespoke architectures trained on smaller labeled sets; in this setting, a frozen foundation model typically encodes each tissue patch into an embedding, and only a lightweight aggregator is trained against slide-level labels. The work highlights the potential to reduce annotation overhead and accelerate the development of diagnostic algorithms. Pathology labs, AI vendors, and regulators will still need to address model interpretability, validation across scanners and staining protocols, and pathways for clinical deployment.
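
To make that pipeline concrete, below is a minimal sketch of a common weakly supervised pattern, attention-based multiple-instance learning over pre-extracted patch features. This is not the study's actual code: the feature dimension, hidden size, class count, bag size, and module names are illustrative assumptions.

```python
# Minimal sketch: weakly supervised slide classification over patch features.
# Assumes embeddings were already extracted by a frozen foundation model;
# all dimensions and names here are illustrative, not from the study.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Aggregates a bag of patch embeddings into one slide-level prediction."""
    def __init__(self, feat_dim: int = 1024, hidden_dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),  # one attention score per patch
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_patches, feat_dim) embeddings for one whole slide
        scores = self.attention(bag)                  # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)        # normalize over patches
        slide_embedding = (weights * bag).sum(dim=0)  # weighted mean pooling
        return self.classifier(slide_embedding)       # slide-level logits

# Toy usage: one "slide" of 500 patches with 1024-d features, binary label.
model = AttentionMIL()
bag = torch.randn(500, 1024)      # stand-in for foundation-model features
label = torch.tensor([1])         # slide-level (weak) label only
logits = model(bag).unsqueeze(0)  # (1, n_classes)
loss = nn.functional.cross_entropy(logits, label)
loss.backward()                   # only the small aggregator is trained
```

The sketch also shows why this setup cuts annotation overhead: the expensive encoder is reused frozen, so the only trainable parameters belong to the small attention head and classifier, supervised by slide-level labels rather than pixel-level annotations.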