Researchers at MIT applied sparse autoencoders to protein language models (PLMs), making these complex AI tools more interpretable by revealing underlying patterns related to protein families and functions from amino acid sequences alone. This added transparency can strengthen scientists' trust in PLM predictions, supporting applications in protein science where understanding structure–function relationships is crucial. The findings were published in PNAS.
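To illustrate the core technique, here is a minimal sketch of a sparse autoencoder: a ReLU encoder maps embeddings into an overcomplete hidden layer, a linear decoder reconstructs them, and an L1 penalty pushes most hidden units to zero so that each active unit can correspond to an interpretable feature. The random vectors below are a hypothetical stand-in for PLM residue embeddings, and all dimensions and hyperparameters are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for PLM residue embeddings: 256 vectors of dim 64.
# In practice these would come from a protein language model's hidden states.
X = rng.normal(size=(256, 64))

d_in, d_hidden = 64, 256  # overcomplete hidden layer, typical for SAEs
W_enc = rng.normal(scale=0.1, size=(d_in, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_in))
b_dec = np.zeros(d_in)

l1, lr = 1e-3, 1e-2  # sparsity weight and learning rate (illustrative)
losses = []

for step in range(200):
    # Forward pass: ReLU encoder -> linear decoder
    h = np.maximum(X @ W_enc + b_enc, 0.0)
    X_hat = h @ W_dec + b_dec
    err = X_hat - X

    # Loss = reconstruction MSE + L1 sparsity penalty on activations
    loss = (err ** 2).mean() + l1 * np.abs(h).mean()
    losses.append(loss)

    # Manual backpropagation through both layers
    g_Xhat = 2.0 * err / err.size
    g_Wdec = h.T @ g_Xhat
    g_bdec = g_Xhat.sum(axis=0)
    g_h = g_Xhat @ W_dec.T + l1 * np.sign(h) / h.size
    g_pre = g_h * (h > 0)          # ReLU gradient
    g_Wenc = X.T @ g_pre
    g_benc = g_pre.sum(axis=0)

    for p, g in ((W_enc, g_Wenc), (b_enc, g_benc),
                 (W_dec, g_Wdec), (b_dec, g_bdec)):
        p -= lr * g

sparsity = (h > 0).mean()
print(f"loss {losses[0]:.4f} -> {losses[-1]:.4f}, fraction active {sparsity:.2f}")
```

After training, inspecting which sparse hidden units fire on which residues is what lets researchers link individual learned features to biological concepts such as protein families.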