to-read (1197)
currently-reading (8)
read (511)
did-not-finish (0)
philosophy (168)
math (82)
history (40)
science (31)
programming (29)
machine-learning (27)
literature (21)
music (19)
data-science (15)
nlp (12)
ai (11)
functional-programming (10)
religion (10)
causality (7)
data-quality (7)
python (7)
“Our brain is therefore not simply passively subjected to sensory inputs. From the get-go, it already possesses a set of abstract hypotheses, an accumulated wisdom that emerged through the sift of Darwinian evolution and which it now projects onto the outside world. Not all scientists agree with this idea, but I consider it a central point: the naive empiricist philosophy underlying many of today's artificial neural networks is wrong. It is simply not true that we are born with completely disorganized circuits devoid of any knowledge, which later receive the imprint of their environment. Learning, in man and machine, always starts from a set of a priori hypotheses, which are projected onto the incoming data, and from which the system selects those that are best suited to the current environment. As Jean-Pierre Changeux stated in his best-selling book Neuronal Man (1985), “To learn is to eliminate.”
― How We Learn: Why Brains Learn Better Than Any Machine . . . for Now
“For advanced analytics, a well-designed data pipeline is a prerequisite, so a large part of your focus should be on automation. This is also the most difficult work. To be successful, you need to stitch everything together.”
― Data Management at Scale: Best Practices for Enterprise Architecture
“Regular expressions are widely used for string matching. Although regular-expression systems are derived from a perfectly good mathematical formalism, the particular choices made by implementers to expand the formalism into useful software systems are often disastrous: the quotation conventions adopted are highly irregular; the egregious misuse of parentheses, both for grouping and for backward reference, is a miracle to behold. In addition, attempts to increase the expressive power and address shortcomings of earlier designs have led to a proliferation of incompatible derivative languages.”
― Software Design for Flexibility: How to Avoid Programming Yourself into a Corner
“So the syntax of the regular-expression language is awful; there are various incompatible forms of the language; and the quotation conventions are baroquen [sic]. While regular expression languages are domain-specific languages, they are bad ones. Part of the value of examining regular expressions is to experience how bad things can be.”
― Software Design for Flexibility: How to Avoid Programming Yourself into a Corner
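The "egregious misuse of parentheses, both for grouping and for backward reference" that the quote complains about can be seen directly in any mainstream regex engine. A minimal sketch using Python's `re` module (my illustration, not the book's):

```python
import re

# Parentheses do double duty: (\w+) both groups the subexpression
# AND captures it, and \1 then refers back to that capture.
m = re.search(r"(\w+) \1", "hello hello world")
assert m.group(0) == "hello hello"
assert m.group(1) == "hello"

# Grouping without capturing needs the extra (?:...) notation,
# and a literal parenthesis must be escaped with a backslash --
# three unrelated meanings for one character.
m2 = re.search(r"\((?:\d+)\)", "score (42)")
assert m2.group(0) == "(42)"
```

The same overloading appears, with mutually incompatible quoting conventions, in POSIX BRE/ERE, PCRE, and the various shell tools, which is the "proliferation of incompatible derivative languages" the authors refer to.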
“Yann LeCun's strategy provides a good example of a much more general notion: the exploitation of innate knowledge. Convolutional neural networks learn better and faster than other types of neural networks because they do not learn everything. They incorporate, in their very architecture, a strong hypothesis: what I learn in one place can be generalized everywhere else.
The main problem with image recognition is invariance: I have to recognize an object, whatever its position and size, even if it moves to the right or left, farther or closer. It is a challenge, but it is also a very strong constraint: I can expect the very same clues to help me recognize a face anywhere in space. By replicating the same algorithm everywhere, convolutional networks effectively exploit this constraint: they integrate it into their very structure. Innately, prior to any learning, the system already “knows” this key property of the visual world. It does not learn invariance, but assumes it a priori and uses it to reduce the learning space-clever indeed!”
― How We Learn: Why Brains Learn Better Than Any Machine . . . for Now
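The "same algorithm replicated everywhere" idea in the quote is just weight sharing: one kernel slid across every position of the input. A minimal 1-D sketch in plain Python (my illustration, not from the book) shows that a shared kernel produces the same peak response to a pattern wherever it appears; only the location of the response moves:

```python
# One shared kernel, reused at every position of a 1-D signal --
# the architectural assumption that gives convolutional networks
# their built-in translation invariance.
kernel = [1.0, -1.0, 1.0]  # the same weights everywhere

def conv1d(signal, k):
    """Valid (no-padding) cross-correlation of signal with kernel k."""
    n = len(k)
    return [sum(s * w for s, w in zip(signal[i:i + n], k))
            for i in range(len(signal) - n + 1)]

pattern = [1.0, 0.0, 1.0]
a = pattern + [0.0] * 5        # pattern at the left edge
b = [0.0] * 5 + pattern        # same pattern shifted 5 steps right

ra, rb = conv1d(a, kernel), conv1d(b, kernel)
# Identical peak response; only its position follows the input shift.
assert max(ra) == max(rb)
assert rb.index(max(rb)) == ra.index(max(ra)) + 5
```

Because the kernel's weights are not re-learned per position, the network never has to learn invariance from data; it is assumed a priori, exactly as the passage describes.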
Goodreads Librarians Group
— 320535 members
— last activity 1 minute ago
Goodreads Librarians are volunteers who help ensure the accuracy of information about books and authors in the Goodreads catalog. The Goodreads Libra ...more
Kitap Kokusunun Peşinde
— 31 members
— last activity Nov 30, 2023 12:25AM
Books tell you something. Books let you live new lives. You travel to lands that do not exist and meet people who do not exist. You talk with them, become friends ...more