Project

 

DeepLever – Deep Learner Explanation & VERification

 

The promise of AI & ML. The present and expected impact of Artificial Intelligence (AI) cannot be overstated. As recently argued by Andrew Ng (a leading researcher in the area of Machine Learning): “AI is the electricity of the 21st century.” This simple statement captures well the level of expectation around AI, and more concretely around the field of Machine Learning (ML). A cornerstone of ML is deep learning which, over the last few years, has achieved above-human-level performance in image and speech recognition, among other tasks. These achievements have contributed to a widespread belief that ML is poised to revolutionize our lives.

Despite the excitement around ML, there are important obstacles to the deployment of ML technology, some of which often go unnoticed. These obstacles are particularly acute in safety-critical settings. In the case of deep neural networks (NNs), one well-known difficulty is the so-called brittleness of these ML models. Under the name of adversarial examples, researchers have shown over the last few years that it is remarkably easy to cause deep learning models to mispredict, in ways that can easily be considered critical for the safety of humans. (A well-known example is the misprediction of a road stop sign as a fairly generous speed limit, when for a human the image is obviously a stop sign. This is just one among a growing number of similarly unsettling examples.)
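To make this brittleness concrete, the following is a minimal sketch (not part of the project's own tooling) of the fast gradient sign method (FGSM), one of the simplest ways of crafting adversarial examples, written here against PyTorch; the model, inputs and labels are placeholders.

import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast gradient sign method: take one step of size `epsilon`
    (in the L-infinity norm) in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift every pixel by +/- epsilon along the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` is any trained classifier, `x` a batch of
# images scaled to [0, 1], `y` the true labels.  The perturbed inputs are
# visually indistinguishable from the originals, yet often mispredicted:
#   x_adv = fgsm_attack(model, x, y)
#   print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))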

The main thesis of the DeepLever project is that a solution to the challenges faced by ML models lies at the intersection of formal methods (FM) and AI. (A recent Summit on Machine Learning Meets Formal Methods offered supporting evidence of how strategic this topic is.) The DeepLever project envisions two main lines of research, concretely the explanation and verification of deep ML models, supported by existing and novel constraint reasoning technologies.

Representative Technologies. Automated reasoners based on SAT, SMT, CP and ILP solvers (among others) have witnessed remarkable progress over the last two decades, and performance breakthroughs in different application domains continue to be observed at an ever growing pace. The reasons for these successes include improved solver technology, more sophisticated encodings, and the exploitation of key concepts such as abstraction refinement and symmetry identification and breaking, among others.
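Purely as an illustration of what such constraint-based encodings look like in practice, the sketch below encodes a tiny, made-up graph coloring instance for the Z3 SMT solver through its Python API, including one simple symmetry-breaking constraint; none of this is specific to DeepLever.

from z3 import Bool, Or, Not, Solver, sat, is_true

# Encode a tiny graph 3-coloring instance as propositional constraints.
# x[v][c] is true iff vertex v gets color c.
vertices, colors = range(4), range(3)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
x = [[Bool(f"x_{v}_{c}") for c in colors] for v in vertices]

s = Solver()
for v in vertices:
    s.add(Or(x[v]))                            # every vertex gets at least one color
for (u, v) in edges:
    for c in colors:
        s.add(Or(Not(x[u][c]), Not(x[v][c])))  # adjacent vertices get different colors
# A simple symmetry-breaking constraint: fix the color of vertex 0.
s.add(x[0][0])

if s.check() == sat:
    m = s.model()
    coloring = [next(c for c in colors
                     if is_true(m.evaluate(x[v][c], model_completion=True)))
                for v in vertices]
    print(coloring)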

Explaining ML Models. One vector of research will be eXplainable AI (XAI). One way to validate the results of ML models is to provide succinct explanations for the predictions made. These explanations can either be recorded for later analysis, or be used by human decision makers to understand and accept the predictions of ML models. Over the last year, my team developed solutions for computing small-size decision trees and decision sets, thus offering guarantees on the sizes of the computed explanations. (This work was presented in 2018 at the top-tier IJCAI and IJCAR conferences.) More recently, my team has exploited abduction to develop a general-purpose solution for computing explanations for essentially any ML model, with a focus on neural network models. (This work will be presented at the top-tier AAAI 2019 conference.) Although this initial work shows promise, much more needs to be done. Explanations for other ML models need to be devised, concretely for ML models with widespread practical adoption. Furthermore, a fundamental limitation of exploiting formal methods in XAI is scalability. Developing XAI solutions for the deep neural networks of the future will require devising new encodings, new algorithms and new optimizations of these algorithms, but also finding new ways to parallelize reasoners.
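The abduction-based idea can be sketched as follows (a deliberately simplified illustration, not the algorithm of the AAAI 2019 paper): an explanation is a subset of the instance's feature literals that, together with the constraints encoding the ML model, entails the prediction; entailment is checked by asking a solver whether the literals, the model constraints and the negated prediction are jointly unsatisfiable. The toy "model" below is a hand-written linear rule used only for illustration.

from z3 import Solver, Real, And, Not, unsat

def minimal_explanation(model_constraint, instance_lits, prediction):
    """Greedy abduction-style explanation: starting from all feature
    literals of the instance, drop any literal whose removal still forces
    the prediction.  A subset E entails the prediction iff
    model_constraint AND E AND NOT(prediction) is unsatisfiable."""
    explanation = list(instance_lits)
    for lit in list(instance_lits):
        candidate = [l for l in explanation if l is not lit]
        s = Solver()
        s.add(model_constraint, *candidate, Not(prediction))
        if s.check() == unsat:       # prediction still entailed without `lit`
            explanation = candidate
    return explanation

# Hypothetical toy "model": predict class 1 whenever 2*x1 + x2 > 3,
# with both features known to lie in [0, 5].
x1, x2 = Real("x1"), Real("x2")
domain = And(x1 >= 0, x1 <= 5, x2 >= 0, x2 <= 5)
prediction = 2 * x1 + x2 > 3
instance = [x1 == 4, x2 == 1]
print(minimal_explanation(domain, instance, prediction))   # [x1 == 4] already suffices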


Verifying ML Models. Another prime vector of research will be the verification of ML models. The issue of NNs being brittle has motivated a fast-growing effort on verifying deep NNs. Existing approaches that scale to large deep NNs are not exact, being characterized as incomplete or as unsound. In turn, this reveals a fundamental limitation of existing constraint-based reasoning technologies to tackle the sheer size of deep NNs (an often-cited figure is that deployed deep NN models will have on the order of 10^7 neurons). One of the proposed lines of research will be to devise a new generation of constraint-based reasoners capable of analyzing deep NNs. This will involve revisiting encodings, devising mechanisms for identifying and breaking problem symmetries, and inventing new ways of exploiting abstraction, among other well-known techniques for tackling the complexity of reasoning. Furthermore, the large size of deep NNs suggests devising ways to parallelize algorithms, but also mechanisms for partitioning the problem representations.
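For illustration only, the sketch below encodes a hand-written two-neuron ReLU network in Z3 and asks whether any input within a small L-infinity ball around a given point can flip the sign of the output; the weights and the robustness query are made up. Such monolithic encodings are precisely what fails to scale to networks with millions of neurons, which is the limitation this line of research targets.

from z3 import Real, If, And, Solver, sat

def relu(e):
    # Piecewise-linear ReLU, encoded as an if-then-else term.
    return If(e > 0, e, 0)

# A hypothetical, hand-written 2-2-1 ReLU network (all weights are made up).
x1, x2 = Real("x1"), Real("x2")
h1 = relu(1.0 * x1 - 1.0 * x2 + 0.5)
h2 = relu(-0.5 * x1 + 1.0 * x2)
y = 1.0 * h1 - 2.0 * h2 + 0.1

# Local robustness query around the point (1, 0), where the output is positive:
# does any input within an L-infinity ball of radius eps make y <= 0?
eps = 0.1
s = Solver()
s.add(And(x1 >= 1 - eps, x1 <= 1 + eps, x2 >= -eps, x2 <= eps))
s.add(y <= 0)

if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("no adversarial input within the eps-ball")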


ANITI Alignment. DeepLever intersects the ANITI Core as well as most of the ANITI integrative programs, primarily Acceptable AI and AI for autonomous critical systems. DeepLever also intersects most of the application areas, primarily Transport/Mobility, Health and Industry 4.0.