Dense but reader-friendly, "Machine Learning Refined" is a well-written and handsomely illustrated book that covers an enormous amount of fundamental material in a relatively light page count. In a number of ways, it is a spiritual cousin of and worthy complement to the also-excellent "Learning from Data" by Abu-Mostafa, Magdon-Ismail, and Lin, which covers similar material more from the standpoint of statistical learning theory (think VC dimension).
Rather than exhaustively cover every variation of the machine-learning algorithms out there, the authors wisely choose to focus on themes common to them all. In particular, I appreciated the recurring theme of basis functions as they relate to both obvious cases (polynomial and Fourier bases, etc.) and more abstract ones (namely neural networks, where increasing depth increases the "flexibility" in one's bases).
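To make the basis-function theme concrete: the unifying idea is that many regressors are just least-squares fits over different feature transformations. The sketch below (my own toy illustration, not code from the book) fits the same noisy data with a polynomial basis and a Fourier basis; the data, degrees, and function names are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (hypothetical; not from the book).
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

def poly_basis(x, degree):
    """Feature columns 1, x, x^2, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

def fourier_basis(x, n_terms):
    """Feature columns 1, sin(2*pi*k*x), cos(2*pi*k*x) for k = 1..n_terms."""
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        cols.append(np.sin(2 * np.pi * k * x))
        cols.append(np.cos(2 * np.pi * k * x))
    return np.column_stack(cols)

def fit_mse(features, y):
    """Least-squares fit of weights; return mean squared training error."""
    w, *_ = np.linalg.lstsq(features, y, rcond=None)
    return float(np.mean((features @ w - y) ** 2))

mse_poly = fit_mse(poly_basis(x, 5), y)
mse_fourier = fit_mse(fourier_basis(x, 3), y)
```

Only the basis-construction step changes between the two fits; the learning machinery is identical, which is the sense in which the basis is the common theme.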
There is indeed some heavy lifting in the book, making some sections (or even entire chapters) less essential for complete digestion on a first reading. That said, none of it is pedantic, and the extra space devoted to some of the lengthier derivations is bound to be appreciated by those who have encountered skipped steps or dreaded left-to-the-reader punting in other texts.
Were I to design the book, I might reorder some of the material (I'd not have put the dimensionality-reduction chapter last, for instance), but this is hardly a major complaint. I hope this book catches on and that the authors stay invested enough to prepare future editions.