Product Description:
- Paper quality: 70 gsm off-white (excellent)
- Cover quality: 260 gsm card
- Digitally printed, with excellent print and paper quality
Book Synopsis:
This book covers a range of interpretability methods, from inherently interpretable models to methods that can make any model interpretable, such as SHAP, LIME and permutation feature importance. It also includes interpretation methods specific to deep neural networks, and discusses why interpretability is important in machine learning. All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted?
"What I love about this book is that it starts with the big picture instead of diving immediately into the nitty gritty of the methods (although all of that is there, too)."
– Andrea Farnham, Researcher at Swiss Tropical and Public Health Institute
Who the book is for
This book is essential for machine learning practitioners, data scientists, statisticians, and anyone interested in making their machine learning models interpretable. It will help readers select and apply the appropriate interpretation method for their specific project.
"This one has been a life saver for me to interpret models. ALE plots are just too good!"
– Sai Teja Pasul, Data Scientist at Kohl's
You'll learn about
- The concepts of machine learning interpretability
- Inherently interpretable models
- Methods to make any machine learning model interpretable, such as SHAP, LIME and permutation feature importance (a short sketch follows this list)
- Interpretation methods specific to deep neural networks
- Why interpretability is important and what's behind this concept
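To give a flavour of the model-agnostic methods listed above, here is a minimal sketch of permutation feature importance using scikit-learn. The library, dataset and model are illustrative choices made for this listing, not examples taken from the book.

```python
# Minimal sketch of permutation feature importance (illustrative only;
# dataset, model and API choices are assumptions, not from the book).
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small regression dataset and fit any black-box model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test score;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```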
About the author
The author, Christoph Molnar, is an expert in machine learning and statistics, with a Ph.D. in interpretable machine learning.
Outline
- Summary
- 1 Preface by the Author
- 2 Introduction
- 3 Interpretability
- 4 Datasets
- 5 Interpretable Models
- 5.1 Linear Regression
- 5.2 Logistic Regression
- 5.3 GLM, GAM and more
- 5.4 Decision Tree
- 5.5 Decision Rules
- 5.6 RuleFit
- 5.7 Other Interpretable Models
- 6 Model-Agnostic Methods
- 7 Example-Based Explanations
- 8 Global Model-Agnostic Methods
- 8.1 Partial Dependence Plot (PDP)
- 8.2 Accumulated Local Effects (ALE) Plot
- 8.3 Feature Interaction
- 8.4 Functional Decomposition
- 8.5 Permutation Feature Importance
- 8.6 Global Surrogate
- 8.7 Prototypes and Criticisms
- 9 Local Model-Agnostic Methods
- 9.1 Individual Conditional Expectation (ICE)
- 9.2 Local Surrogate (LIME)
- 9.3 Counterfactual Explanations
- 9.4 Scoped Rules (Anchors)
- 9.5 Shapley Values
- 9.6 SHAP (SHapley Additive exPlanations)
- 10 Neural Network Interpretation
- 10.1 Learned Features
- 10.2 Pixel Attribution (Saliency Maps)
- 10.3 Detecting Concepts
- 10.4 Adversarial Examples
- 10.5 Influential Instances
- 11 A Look into the Crystal Ball