Chia-Hsiu CHEN (D3), Department of Chemical System Engineering, received the Best Student Presentation Award at the 41st Symposium on Chemoinformatics


On 27th October 2018, Chia-Hsiu CHEN (D3), Department of Chemical System Engineering, received the Best Student Presentation Award at the 41st Symposium on Chemoinformatics. The award recognizes an outstanding presentation by a student at the symposium.



<About the awarded research>
Title: "How can we trust QSPR models?": Ideas on building interpretable machine learning methods

Abstract: In the chemical industry, designing novel compounds with desired characteristics is a bottleneck in chemical manufacturing development. Quantitative structure–property relationship (QSPR) modeling with machine learning techniques can make chemical design more efficient and produce better results. A challenge associated with current QSPR models is the lack of interpretability when operating black-box models. Hence, interpretable machine learning methods are essential for researchers to understand, trust, and effectively manage a QSPR model. Global interpretability and local interpretability are two typical ways to define the scope of model interpretation. Global interpretation provides information on structure–property relationships across a series of compounds, helping shed light on mechanisms of action and on the activity and properties of compounds. Local interpretation gives information about how different parts of a single compound influence the property, and can be applied to identify structural motifs that reduce or enhance it. Global interpretability enables knowledge mining, helping researchers derive hypotheses that accelerate fundamental research. Local interpretability provides more specific and actionable information for further structural optimization to improve the property. In this presentation, we focus on the design of interpretable frameworks for typical trustworthy machine learning models. Two approaches to interpretable models, based on ensemble learning and deep learning, will be presented to achieve global interpretation and local interpretation, respectively. The proposed interpretable models, using ensemble learning and deep learning as interpretable frameworks, perform as well as or better than typical trustworthy models.
We believe that trust in QSPR models can be enhanced by interpretable machine learning methods that conform to human knowledge and expectations.
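To illustrate the idea of local interpretation described in the abstract, the following is a minimal, hypothetical sketch (not the awarded work's actual method): a toy linear QSPR-style model over invented fragment-count descriptors, where each fragment's contribution to the predicted property is estimated by masking that fragment and measuring the change in the prediction.

```python
# Illustrative sketch only: all descriptor names, counts, and weights are
# invented for demonstration and do not come from the awarded research.

# Hypothetical fragment-count descriptors for one compound.
descriptors = {"aromatic_ring": 2, "hydroxyl": 1, "halogen": 0, "amine": 1}

# Hypothetical learned weights mapping each fragment to a property value.
weights = {"aromatic_ring": 0.9, "hydroxyl": -0.7, "halogen": 0.5, "amine": -0.4}

def predict(x):
    """Toy linear model: property = sum of weight * fragment count."""
    return sum(weights[k] * v for k, v in x.items())

def local_attribution(x):
    """Attribute the prediction to each fragment by setting its count to
    zero and measuring how much the predicted property changes."""
    base = predict(x)
    contributions = {}
    for k in x:
        masked = dict(x)
        masked[k] = 0
        contributions[k] = base - predict(masked)
    return contributions

# Fragments with positive contributions enhance the predicted property;
# negative contributions reduce it (e.g. aromatic_ring ~ +1.8 here).
print(local_attribution(descriptors))
```

In practice, deep-learning-based local interpretation typically uses gradient- or attention-based attributions rather than simple masking, but the masking view conveys the same intent: identifying which structural motifs enhance or reduce the property of a single compound.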



I am very pleased and grateful to accept this award. I would like to thank my advisor, Prof. Funatsu, for providing me the opportunity to do this work and for all his support and guidance. I am also grateful to the members of the laboratory. This accomplishment would not have been possible without them.


Laboratory of Chemoinformatics: