Agree it's essential reading for any serious software developer. It pretty much sums up the way I used to think about using multiple programming languages and the right tool for the right job. Neatly describes complexity and simplicity in software. Lots of insights about FP vs. OOP.
There are two parts to this work. The first part explains how OO programming, functional programming, and relational algebra each try to tame the sprawl of accidental complexity (accidental state, behaviour, and performance) in software systems. The second part describes a new way to model the same concerns, which the authors have named Functional Relational Programming (FRP).
Past reviews have described the first part as essential reading, while characterising the second part as esoteric. I beg to differ. For today's readers and practitioners of software engineering, awareness of the paradigms of FP, OOP, and relational algebra can be assumed, even if they do not use them day-to-day. To such an audience, the first part is no more than a refresher of the ideas and an exercise in contrasting their approaches.
The second part, on FRP, is an elegant description of a system with separation of concerns at multiple levels: between the state, behaviour, and performance concerns of the system, and between the infrastructure and the logic of the system. I found this paper at an opportune time, as I have been trying to make sense of the tar pit I tend to work in (the post-modern data infrastructure of circa 2020), and I have found it worthy of my highest praise.
P.S.: A complementary read would be Maxime Beauchemin's (creator of Apache Airflow) essay on Functional Data Engineering. Cf. https://medium.com/@maximebeauchemin/...
Excellent read. I found this cited in "Data Oriented Design" by Richard Fabian, and while Fabian's book works as a nice cookbook for a specific kind of games programming, this paper satisfied my yearning for a more thorough explanation of designing software around the relational model.
I'm still a believer in the OOP object, but Out of the Tar Pit presents convincing arguments about the inherent limits of making information in the system accessible only through a network model (objects calling other objects they know about), and about how a run-time relational database could circumvent that limitation.
The paper does a very good job of presenting just the right amount of background to state the authors' case, and I feel quite convinced by the combination of functional programming for business logic and relational algebra for representing state. I particularly like the view of essential state as only that which is visible to the user, which echoes notions from agile processes that also put the user's experience of the system forward as a driving factor. My job as an engineer is to make the program provide the desired service to the end user and to eliminate, as far as possible, any intermediate complexity behind the scenes.
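To make the combination concrete, here is a minimal sketch of the idea in Python (my own illustration, not the authors' code): essential state lives in plain relations, and business logic is expressed as pure functions over them.

```python
# Essential state: a relation is just a collection of tuples (here, dicts).
orders = [
    {"id": 1, "customer": "alice", "total": 120},
    {"id": 2, "customer": "bob", "total": 40},
    {"id": 3, "customer": "alice", "total": 75},
]

# Essential logic: pure, derived views, analogous to relational
# "select" (restrict) and "project" operators.
def select(relation, predicate):
    return [row for row in relation if predicate(row)]

def project(relation, columns):
    return [{c: row[c] for c in columns} for row in relation]

# A derived relation: customers with orders over 50, computed on demand
# rather than stored as extra mutable state.
big_orders = project(select(orders, lambda r: r["total"] > 50), ["customer"])
# big_orders == [{"customer": "alice"}, {"customer": "alice"}]
```

Nothing here mutates; every "view" is recomputed from the essential state, which is what makes the logic trivially testable.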
This just might end up on my list of essential reads.
Maybe because I recently read a lot of papers on the topic and now view everything through the "effect" lens, it seems like FRP can be nicely implemented (setting the relational part aside) using high- and low-level interpreters. This way you can:
- clearly separate data and control flows, as stated by the paper;
- put all essential logic into high-level effects, which declaratively state what the system is supposed to be doing;
- test it independently from accidental logic;
- put all accidental logic into low-level effects, which are used by the high-level interpreters.
Even the complexity of the system grows only linearly when you use effects, because they generally do not depend on each other.
I'm not so sure about the whole "relational" topic, though, mostly because I'm not well versed in it. I may update this review after I read more, but for now it's not completely clear to me how it can even be implemented, let alone used.
Out of the Tar Pit helps you understand why your system is so hard to comprehend. If you work on a large business system that you and your colleagues have been building for a while, you will likely find it hard, or close to impossible, to fully comprehend.
The authors speak about the difficulty of testing and understanding systems; the two kinds of complexity, essential and accidental; and the major approaches (OOP, FP, and logic programming) and how each, in its own way, helps or hinders eliminating complexity and understanding the overall picture of your system.
On top of that, they add practical suggestions for decreasing the complexity of the system you develop, regardless of the approach used (avoid it, or separate it out); which approach one should embrace: the relational model (with its SQL-like pillars of structure, integrity, manipulation, and data independence); and what one should avoid: data abstraction (subjectivity and data hiding), which tends to be wrongly reused and thus adds the complexity of hidden, unused data structures.
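The point about data hiding can be illustrated with a small contrast (my own example, not the paper's): an encapsulating class only answers the questions its methods were written for, while an open relation can be queried in ways its original author never anticipated.

```python
class Account:
    """Data abstraction: state is hidden behind methods."""
    def __init__(self, balance):
        self._balance = balance

    def can_withdraw(self, amount):
        # Only the queries imagined up front are possible.
        return amount <= self._balance

# Relational style: state is open, so any future query is just
# another expression over the relation.
accounts = [
    {"owner": "alice", "balance": 100},
    {"owner": "bob", "balance": 30},
]

total = sum(row["balance"] for row in accounts)
overdrafted = [r["owner"] for r in accounts if r["balance"] < 0]
# total == 130, overdrafted == []
```

The trade-off the authors weigh is that the open form exposes more surface area, but avoids the hidden, half-reused structures that accumulate behind abstractions.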
At the end, they offer the theoretical (not yet implemented) approach of FRP, which embraces all the suggestions made in the article.
A classic paper from 2006 I first heard about in one of Rich Hickey's lectures. Stopped reading halfway; the second half (where the authors elaborate on a possible approach that follows the strategies they recommend in the first half) didn't seem very relevant. A lot of ideas I found very easy to agree with; one can see how much of the modern software landscape is informed by ideals of declarative code and immutable state. Personal favorite: Section 6, where performance and infrastructure are framed as accidental, NOT essential, complexity of software; it feels exciting to think of performance (i.e. hardware) bottlenecks as tractable problems that may not exist in the future. The Future of Coding podcast has a really fun episode about this paper, which points out its strengths and weaknesses and greatly enhances the reading experience: https://futureofcoding.org/episodes/063.
A very interesting read for a programmer. Complexity is one of the biggest problems in software development, and the only way to deal with it is to simplify. This essay explains a way to do that.
Exactly what I'd expect from academic computer science: a few interesting observations defining a problem, followed by pages upon pages of a solution that has absolutely no applicability to the real world whatsoever. I found the beginning compelling, where it strictly defines the accidental complexity of a system in contrast to its essential complexity. However, the authors end with an abstract infrastructure that they claim takes only 1500 lines of Scheme to implement (!!) and is completely dependent on a relational data model, which has completely fallen out of favor for scalable systems. Maybe read the first 10 pages if you want, but otherwise don't bother.