Abstract

A friend of ours applies to his bank for a credit card to pay for a vacation, only to discover that the credit he is offered is surprisingly low. The bank teller cannot explain why. Our stubborn friend pursues an explanation all the way up to the bank's executives, only to discover that an algorithm automatically lowered his credit score. Why? After a long, ad-hoc investigation, it turns out that the cause is ... the bad credit history of the former owner of our friend's house.

Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also because the algorithms may inherit biases from human prejudices and from collection artifacts hidden in the training data, which may lead to unfair or wrong decisions.

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any effective collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI addresses these challenges, and for years different AI communities have studied the topic, leading to different definitions, evaluation protocols, motivations, and results.

My talk focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems. First, it will motivate the need for Explainable AI in real-world and large-scale applications; it will then present state-of-the-art techniques and best practices, getting into the specifics of the approaches and the research challenges for the next steps.