
The Little Learner: A Straight Line to Deep Learning

A highly accessible, step-by-step introduction to deep learning, written in an engaging, question-and-answer style.

The Little Learner introduces deep learning from the bottom up, inviting students to learn by doing. With the characteristic humor and Socratic approach of classroom favorites The Little Schemer and The Little Typer, this kindred text explains the workings of deep neural networks by constructing them incrementally from first principles using little programs that build on one another. Starting from scratch, the reader is led through a complete implementation of a substantial recognizer for noisy Morse code signals. Example-driven and highly accessible, The Little Learner covers all of the concepts necessary to develop an intuitive understanding of the workings of deep neural networks, including tensors, extended operators, gradient descent algorithms, artificial neurons, dense networks, convolutional networks, residual networks, and automatic differentiation.

440 pages, Paperback

Published February 21, 2023

41 people are currently reading
155 people want to read

About the author

Daniel P. Friedman

12 books · 95 followers
Daniel P. Friedman is Professor of Computer Science in the School of Informatics, Computing, and Engineering at Indiana University and is the author of many books published by the MIT Press, including The Little Schemer and The Seasoned Schemer (with Matthias Felleisen); The Little Prover (with Carl Eastlund); and The Reasoned Schemer (with William E. Byrd, Oleg Kiselyov, and Jason Hemann).

Ratings & Reviews



Community Reviews

5 stars
5 (26%)
4 stars
11 (57%)
3 stars
1 (5%)
2 stars
2 (10%)
1 star
0 (0%)
Displaying 1 - 3 of 3 reviews
thirtytwobirds
105 reviews · 55 followers
July 7, 2023
Overall: a nice introduction to deep learning through Scheme-colored glasses. The Socratic dialog style of The Little Xer series lends itself well to explaining things bit by bit, and after going through the book I had a much better understanding of the building blocks.

With that said, the book isn't perfect. There are a few caveats/annoyances that weren't a deal-breaker for me, but might be for other folks.

First: there's an introductory chapter on Scheme, but I really don't think it would be enough to take someone with zero prior exposure to Scheme and make them ready for the rest of the book. I was okay because I've gone through the Little Schemer and Seasoned Schemer (and do a lot of Common Lisp), but even I found some of the definitions hard to mentally parse. I think you really need some prior practice in Scheme so you don't have to struggle line-by-line through the awkward tail-recursive definitions they use everywhere because schemers are allergic to `loop`.

The Socratic dialog style works pretty well for most of the book, but the insistence on doing *everything* this way holds it back. The book could *really* benefit from adding in a diagram or two to help explain things that are awkward to describe through words, e.g. in chapters 10 and 11 it would have been really helpful to see the traditional diagram of an artificial neuron next to the weight/bias tensors to help keep in mind which parameter/dimension is which.

The "compact notation" they use throughout the book was more annoying than helpful. They say they used it to make it "easier to read" and "fit snugly in the little boxes". The latter is probably true (though I doubt it adds up to very many pages saved in the end), but the former is not: `(tlen theta)` is not harder to read than `vertical bars with sloped slashes through them around theta`. And if you're doing what you *should* be doing (actually implementing the functions as you go, not just passively reading the answers) then you have to constantly translate between the book's annoying mathy-looking notation and the actual Scheme code it's hiding, which gets extremely tedious.

And finally, on the topic of actually implementing things, there's some really annoying magic they try to paper over that will probably bite you at some point. I'll give an example here to hopefully save other people some time:


> 0
0

> (zero? 0)
#t

> (- 3 (+ 1 2))
0

> (zero? (- 3 (+ 1 2)))
. . zero?: contract violation
expected: number?
given: 0

> (= 0 (- 3 (+ 1 2)))
#t


...what? There's a few pieces you need to stumble through to figure out what's happening here.

Their package overrides the vanilla Scheme arithmetic operators to allow automatic differentiation, described in Appendix A. This is a bit surprising at first, but fine in the context of the book.

But they don't bother overriding *all* the vanilla Scheme functions, e.g. above the `=` works, but `zero?` is the vanilla Scheme version and so explodes when fed something from their overridden `-` function. This is annoying.

But the worst part: they've somehow used whatever Scheme's equivalent of CL's `print-object` method is to make their special thing returned by their overridden functions print like a bare number (e.g. `(- 3 (+ 1 2))` printing as `0`). This is absolutely unhinged, and is what makes the error message (`expected: number? / given: 0`) so wild. If you want to override arithmetic to return a special thing, fine, but please don't trick my REPL into lying to me.
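The mechanism being complained about can be sketched in a few lines. This is a hypothetical Python illustration of the general idea (the book's actual implementation is in Scheme and differs in detail): an overridden operator returns a wrapper object carrying derivative bookkeeping, but the wrapper's printed form shows only the bare number, so the REPL output looks like an ordinary `0` while number predicates reject it.

```python
# Hypothetical sketch of a "dual" value: it carries extra bookkeeping
# for automatic differentiation, but prints like a bare number.
class Dual:
    def __init__(self, value, link=None):
        self.value = value  # the numeric result
        self.link = link    # backward-pass bookkeeping (unused here)

    def __repr__(self):
        # Printing only the value is what makes the REPL "lie":
        # the object looks exactly like an ordinary number.
        return repr(self.value)

def sub(a, b):
    """Overridden subtraction: returns a Dual, not a number."""
    av = a.value if isinstance(a, Dual) else a
    bv = b.value if isinstance(b, Dual) else b
    return Dual(av - bv)

x = sub(3, 3)
print(x)                   # prints 0, indistinguishable from a real number
print(isinstance(x, int))  # False: a number predicate rejects it
```

So a predicate analogous to `zero?` that insists on a real number fails on `x`, even though `x` prints as `0`.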

It was surprisingly painful to figure out what the hell the problem actually was because I had a hard time finding something in scheme that would just *tell me what this object is*. This might just be my lack of familiarity with Scheme, and if so: fair enough.

I had my suspicions about the arithmetic overriding, and in Common Lisp I would immediately reach for something like `(type-of (- 3 (+ 1 2)))` that would hopefully shed some light on what I'm looking at (yes, I know type /= class but it would have *at least* given me a starting point). But searching for an equivalent to CL's `type-of` in Scheme just led me to multiple Stack Overflow pages where people asked how to do this, and the answers were all variations of "Actually you don't want to get the type of an object you silly baby child, you want to use predicates like `number?` because it's more modular and extensible" which may often be the case, but was unhelpful to me here because it's the predicates themselves that were exploding because my REPL was lying to me and I just wanted to figure out what the goddamn object actually *is*.

Eventually I managed to find a third-party package implementing `describe` (I guess CL really *has* spoiled me). I don't know what a "planet williams" is, but apparently it's alien technology that allows you to see through lies:


(require (planet williams/describe/describe))

> (describe 0)
0 is a byte (i.e., an exact positive integer fixnum between 0 and 255 inclusive) zero

> (describe (- 3 (+ 1 2)))
#(#[procedure:dual] 0 #[procedure:...utodiff/B-prims.rkt:20:10]) is a mutable vector of length 3


Thank you, Planet Williams. This puts my mind at ease that it's the authors doing something wild, and I haven't lost my mind when I see that `0` does not satisfy `number?`.
Ivan Chernov
208 reviews · 9 followers
May 22, 2023
Overall, the book is quite small. It consists of a sequential description of two machine learning methods: gradient descent and neural networks. They are described quite well, even though Lisp is used for the purpose. A separate nice touch is that at the end of the book they provide a complete translator from a superset of Lisp into a real program.

The main problem is the format: the book is structured as a dialogue, and that makes it too hard to read. If the same text were presented as plain explanatory prose, it would be better. But this book is apparently aimed at a younger audience, which is probably why this format was chosen.

An excerpt is here:
https://vanadium23.me/openbox/books/t...
8 reviews · 1 follower
May 29, 2023
Good overview of Neural Nets and Machine Learning. The book is in the form of a dialog between teacher and student. If you follow along, you write little Scheme programs to implement the main ideas of NN and ML. A great way to be introduced to the concepts!
