Interpretable Machine Learning: The Free eBook

Interested in learning more about interpretability in machine learning? Check out this free eBook to learn about the basics, simple interpretable models, and strategies for interpreting more complex black box models.



Interpretable machine learning is a genuine concern for stakeholders across the field. No longer an esoteric worry, or a "nice to have" for practitioners, the importance of interpretable machine learning and AI has become apparent to more and more people in recent years, for a wide array of reasons.

All of this could leave one wondering: where does one go to find quality reading material for learning about such an important topic? Enter Interpretable Machine Learning, a free eBook by Christoph Molnar.

First, what is the motivation for the book? The following comes directly from the book itself:
 

Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

 

Molnar goes on to say in the book's preface:
 

Given the success of machine learning and the importance of interpretability, I expected that there would be tons of books and tutorials on this topic. But I only found the relevant research papers and a few blog posts scattered around the internet, but nothing with a good overview. No books, no tutorials, no overview papers, nothing. This gap inspired me to start this book.

 

The book is described as progressing as follows:
 

After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. Later chapters focus on general model-agnostic methods for interpreting black box models like feature importance and accumulated local effects and explaining individual predictions with Shapley values and LIME.

 

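To give a flavor of what a model-agnostic method looks like in practice, here is a minimal sketch of permutation feature importance, one of the techniques the book covers. The scikit-learn workflow and toy dataset below are illustrative assumptions on our part, not code taken from the book:

```python
# A minimal sketch of one model-agnostic method the book covers:
# permutation feature importance. The scikit-learn usage and toy
# dataset here are illustrative assumptions, not from the book itself.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an opaque "black box" model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Measure how much the held-out score drops when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, mean_drop in sorted(zip(X.columns, result.importances_mean),
                              key=lambda t: -t[1]):
    print(f"{name}: {mean_drop:.3f}")
```

The idea is the same regardless of the underlying model: shuffle one feature at a time and see how much predictive performance degrades, which is exactly why such methods are called model-agnostic.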
Molnar is a data scientist and PhD candidate in interpretable machine learning, so you can rest assured that this won't be a collection of outdated or marginal ideas on the subject. Instead, expect the distilled expertise of someone who is clearly invested in and passionate about the topic and has studied it in depth.

The book's table of contents is as follows:

  1. Introduction
  2. Interpretability
  3. Datasets
  4. Interpretable Models
  5. Model-Agnostic Methods
  6. Example-Based Explanations
  7. Neural Network Interpretation
  8. A Look Into The Crystal Ball

 

If you don't have the time or interest to read the book cover to cover, Molnar says:
 

[Y]ou can jump back and forth and concentrate on the techniques that interest you most. I only recommend that you start with the introduction and the chapter on interpretability. Most chapters follow a similar structure and focus on one interpretation method.

 

And what type of data does the book focus on modeling?
 

The book focuses on machine learning models for tabular data (also called relational or structured data) and less on computer vision and natural language processing tasks. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.

 

The book is being continuously updated ("similar to how software is developed"), and Molnar invites others to contribute as well. Fix and update suggestions can be made as pull requests via the book's GitHub.

For those interested, the book can also be purchased for a reasonable price in PDF and eBook form on leanpub.com, while a print version can be found on lulu.com.

 