J Med Chem. 2019 Sep 26. doi: 10.1021/acs.jmedchem.9b01101. [Epub ahead of print]

Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values.

Author information

1. Department of Life Science Informatics, B-IT, LIMES Program Unit Chemical Biology and Medicinal Chemistry, Rheinische Friedrich-Wilhelms-Universität, Endenicher Allee 19c, D-53115 Bonn, Germany.
2. Department of Medicinal Chemistry, Boehringer Ingelheim Pharma GmbH & Co. KG, Birkendorfer Straße 65, 88397 Biberach an der Riß, Germany.

Abstract

In qualitative or quantitative studies of structure-activity relationships (SARs), machine learning (ML) models are trained to recognize structural patterns that differentiate between active and inactive compounds. Understanding model decisions is challenging but of critical importance to guide compound design. Moreover, the interpretation of ML results provides an additional level of model validation based on expert knowledge. A number of complex ML approaches, especially deep learning (DL) architectures, have a distinctive black-box character. Herein, a locally interpretable explanatory method termed Shapley additive explanations (SHAP) is introduced for rationalizing activity predictions of any ML algorithm, regardless of its complexity. Models resulting from random forest (RF), nonlinear support vector machine (SVM), and deep neural network (DNN) learning are interpreted, and structural patterns determining the predicted probability of activity are identified and mapped onto test compounds. The results indicate that SHAP has high potential for rationalizing predictions of complex ML models.
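As a rough illustration of the workflow summarized above (a sketch, not the authors' own code), the following Python snippet applies the open-source shap package to a random forest classifier trained on toy fingerprint data. The feature matrix X, the labels y, and all parameter values are placeholder assumptions; in the paper's setting the features would correspond to structural fingerprint bits of real compounds.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a compound data set: 100 compounds encoded as
# 1024-bit binary fingerprints with random activity labels (assumption).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 1024))
y = rng.integers(0, 2, size=100)

# Train the model whose predictions are to be rationalized.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# shap.KernelExplainer is the model-agnostic fallback for SVM or DNN models.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:5])

# Older shap releases return a list of per-class arrays, newer ones a 3-D array;
# select the contributions toward the "active" class for the first compound.
contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Fingerprint bits with the largest absolute SHAP values are the structural
# patterns driving the predicted probability of activity for this compound.
top_bits = np.argsort(-np.abs(contrib))[:10]
for bit in top_bits:
    print(f"bit {bit}: SHAP value {contrib[bit]:+.4f}")

Mapping the highlighted bits back onto compound substructures (for example, via the bit-information bookkeeping of a fingerprint toolkit such as RDKit) would then yield the kind of atom-level visualization of activity-determining patterns described in the abstract.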
