Christoph Molnar, Goodreads Author (member since June 2016)
“Explanations are selected. People do not expect explanations that cover the actual and complete list of causes of an event. We are used to selecting one or two causes from a variety of possible causes as THE explanation.”
― Interpretable Machine Learning: A Guide For Making Black Box Models Explainable
“The computation of partial dependence plots is intuitive: The partial dependence function at a particular feature value represents the average prediction if we force all data points to assume that feature value. In my experience, lay people usually understand the idea of PDPs quickly. If the feature for which you computed the PDP is not correlated with the other features, then the PDPs perfectly represent how the feature influences the prediction on average. In the uncorrelated case, the interpretation is clear: The partial dependence plot shows how the average prediction in your dataset changes when the j-th feature is changed. It is more complicated when features are correlated, see also disadvantages. Partial dependence plots are easy to implement. The calculation for the partial dependence plots has a causal interpretation. We intervene on a feature and measure the changes in the predictions. In doing so, we analyze the causal relationship between the feature and the prediction. The relationship is causal for the model – because we explicitly model the outcome as a function of the features – but not necessarily for the real world!”
― Interpretable Machine Learning: A Guide For Making Black Box Models Explainable
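The procedure the quote describes can be sketched in a few lines. This is a minimal illustrative implementation, not code from the book: it assumes a dataset given as a list of feature lists and a model exposed as a plain `predict(x)` callable (both names are ours), and averages predictions after forcing every data point to each grid value of the chosen feature.

```python
# Minimal partial dependence sketch (illustrative, not the book's code).
# `predict` is any callable mapping a feature list to a number;
# `data` is a list of feature lists; `feature_idx` is the j-th feature.

def partial_dependence(predict, data, feature_idx, grid):
    """For each value v in `grid`, intervene by setting feature
    `feature_idx` to v for every data point, then average the
    model's predictions over the dataset."""
    pd_values = []
    for v in grid:
        total = 0.0
        for x in data:
            x_mod = list(x)          # copy so the dataset is untouched
            x_mod[feature_idx] = v   # intervene on the j-th feature
            total += predict(x_mod)
        pd_values.append(total / len(data))
    return pd_values

# Toy model: linear in feature 0, with feature 1 adding an offset.
model = lambda x: 2.0 * x[0] + x[1]
data = [[1.0, 5.0], [2.0, 3.0], [3.0, 1.0]]
print(partial_dependence(model, data, 0, [0.0, 1.0, 2.0]))
# → [3.0, 5.0, 7.0]: for this linear model the PDP is itself a line
#   with slope 2, shifted by the mean of feature 1 (which is 3.0).
```

Note that the intervention is exactly the "force all data points to assume that feature value" step from the quote, which is also why correlated features are a problem: the intervention can create feature combinations that never occur in the real data.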
“FIGURE 5.8: Three assumptions of the linear model (left side): Gaussian distribution of the outcome given the features, additivity (= no interactions) and linear relationship. Reality usually does not adhere to those assumptions (right side): Outcomes might have non-Gaussian distributions, features might interact and the relationship might be nonlinear.”
― Interpretable Machine Learning: A Guide For Making Black Box Models Explainable
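The linearity assumption in the figure caption is easy to demonstrate failing. The sketch below (ours, not from the book) fits ordinary least squares in closed form to a truly quadratic outcome: the best linear fit has slope zero, and the residuals show a clear U-shaped pattern instead of random noise.

```python
# Illustrative sketch: a linear fit to nonlinear data leaves
# structured residuals, violating the linear-relationship assumption.

def ols_fit(xs, ys):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return mean_y - slope * mean_x, slope

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x ** 2 for x in xs]            # truly quadratic outcome
b0, b1 = ols_fit(xs, ys)
residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
print(b0, b1, residuals)
# → 2.0 0.0 [2.0, -1.0, -2.0, -1.0, 2.0]
# The symmetric U-shape in the residuals is the telltale sign that
# the linear-relationship assumption does not hold for this data.
```

The same kind of residual diagnostic applies to the other two assumptions: non-Gaussian residual distributions and interaction effects also leave visible structure in the residuals.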