ChemistrySelect, vol. 10, no. 30, 2025 (SCI-Expanded)
The Hammett constants (σ) describe the electron-withdrawing and electron-donating effects of substituents in aromatic compounds and are widely used in structure–activity relationship studies. However, their experimental determination is resource-intensive and time-consuming. Although graph neural networks (GNNs), such as GCN and Weave, have been proposed for predicting Hammett constants from graph-based features, they suffer from poor interpretability. To address this limitation, we introduce Inter-Hammett, a framework designed to enhance interpretability while maintaining high predictive performance. Inter-Hammett leverages cheminformatics-derived descriptors from RDKit, Mordred, PyBioMed, and CDK, followed by rigorous AutoGluon-based feature selection to mitigate the curse of dimensionality. The model core is trained using RuleFit on 85% of the dataset, ensuring a balance between accuracy and interpretability. On unseen data, Inter-Hammett achieved an R² of 0.880 and an RMSE of 0.128, outperforming eleven models, including four recently published state-of-the-art deep learning approaches. Additionally, a comprehensive interpretability analysis using seven different methods further enhances transparency, making Inter-Hammett a robust alternative for Hammett constant prediction.
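The reported performance figures (R² and RMSE) follow standard definitions; a minimal numpy sketch of how such metrics are computed on a held-out set is shown below (the array values are illustrative placeholders, not data from the paper):

```python
import numpy as np

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true, y_pred):
    # Root-mean-square error of the predictions
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative held-out predictions (hypothetical values)
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(r2_score(y_true, y_pred), rmse(y_true, y_pred))
```

Equivalent implementations are available as `sklearn.metrics.r2_score` and `sklearn.metrics.root_mean_squared_error`.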