Explainable AI refers to the development of intelligent systems able to provide high-quality, transparent solutions. These solutions provide an introspection mechanism to better understand how particular outcomes have been obtained. This feature plays a pivotal role in Business Intelligence: companies are less likely to implement automated intelligent solutions that neither stakeholders nor customers can understand. The acceptance of an intelligent model is not only a matter of trust but also a matter of ethics and legality. Governmental agencies have moved to legislate the use of machine learning and increasingly require a level of insight and transparency into decision-making processes. As a result, the focus of machine learning is gradually shifting from pursuing ever more accurate models to improving the explanations behind those models.
While there is an increasing trend in developing post-hoc procedures to understand how black-box models operate, our research group has focused on developing inherently interpretable intelligent systems such as Fuzzy Cognitive Maps. Our intelligent systems are not only capable of computing solutions with high levels of accuracy but also of reasoning on the basis of expert knowledge, which often results in more realistic solutions. This means that our models are designed to reason together with human beings while being enhanced with available data records.
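As a rough illustration of why Fuzzy Cognitive Maps are inherently interpretable, the sketch below shows the basic reasoning step commonly used in the FCM literature: concept activations are repeatedly propagated through a causal weight matrix and squashed by a transfer function until they stabilise. The three concepts, the weight values and the sigmoid transfer function are illustrative assumptions for this sketch only, not the exact formulation used in our models.

```python
import numpy as np

# Hypothetical three-concept map. W[i, j] encodes the expert-stated causal
# influence of concept i on concept j, in the range [-1, 1].
W = np.array([
    [0.0,  0.6, -0.3],   # concept 0 promotes concept 1 and inhibits concept 2
    [0.0,  0.0,  0.8],   # concept 1 reinforces concept 2
    [0.4,  0.0,  0.0],   # concept 2 feeds back into concept 0
])

def sigmoid(x, slope=1.0):
    """Transfer function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-slope * x))

def fcm_inference(W, a0, steps=20, tol=1e-5):
    """Iterate the update rule A(t+1) = f(A(t) @ W) until a fixed point is reached."""
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a_next = sigmoid(a @ W)
        if np.allclose(a, a_next, atol=tol):  # activations have stabilised
            break
        a = a_next
    return a

# Initial activation vector representing the observed evidence.
print(fcm_inference(W, [0.7, 0.2, 0.1]))
```

Because every weight corresponds to a causal statement an expert can read and contest, the chain of activations leading to the final state can be inspected directly, which is what makes this family of models transparent by design.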
As part of our research efforts, we have developed intelligent systems for classifying patterns, analyzing time series, discovering segments and communities, and reasoning with symbolic knowledge structures, among other tasks. These solutions help stakeholders find relevant patterns in the data, which can be translated into more effective data-oriented decision-making processes.
Do you have questions about this research line, or do you see potential for collaboration? Please feel free to contact prof. dr. Koen Vanhoof.