This is an initial summary report of a project taking a new and systematic approach to improving the intellectual effectiveness of the individual human being. A detailed conceptual framework explores the nature of the system composed of the individual and the tools, concepts, and methods that match his basic capabilities to his problems. One of the tools that shows the greatest immediate promise is the computer, when it can be harnessed for direct on-line assistance, integrated with new concepts and methods.
The language is complicated. Still, this work is very insightful for 1962. Some of what it imagines still doesn't exist; we don't have the level of augmentation envisioned in this paper.
Some quotes below: -- You can integrate your new ideas more easily, and thus harness your creativity more continuously, if you can quickly and flexibly change your working record. If it is easier to update any part of your working record to accommodate new developments in thought or circumstance, you will find it easier to incorporate more complex procedures in your way of doing things. -- "You're probably waiting for something impressive. What I'm trying to prime you for, though, is the realization that the impressive new tricks all are based upon lots of changes in the little things you do. This computerized system is used over and over and over again to help me do little things--where my methods and ways of handling little things are changed until, lo, they've added up and suddenly I can do impressive new things." -- "I found, when I learned to work with the structures and manipulation processes such as we have outlined, that I got rather impatient if I had to go back to dealing with the serial-statement structuring in books and journals, or other ordinary means of communicating with other workers. It is rather like having to project three-dimensional images onto two-dimensional frames and to work with them there instead of in their natural form. Actually, it is much closer to the truth to say that it is like trying to project n-dimensional forms (the concept structures, which we have seen can be related with many many nonintersecting links) onto a one-dimensional form (the serial string of symbols), where the human memory and visualization has to hold and picture the links and relationships. I guess that's a natural feeling, though. One gets impatient any time he is forced into a restricted or primitive mode of operation--except perhaps for recreational purposes."
-- A number of people, outside our research group here, maintain stoutly that a practical augmentation system should not require the human to have to do any computer programming--they feel that this is too specialized a capability to burden people with. Well, what that means in our eyes, if translated to a home workshop, would be like saying that you can't require the operating human to know how to adjust his tools, or set up jigs, or change drill sizes, and the like. You can see there that these skills are easy to learn in the context of what the human has to learn anyway about using the tools, and that they provide for much greater flexibility in finding convenient ways to use the tools to help shape materials. -- With the human contributing to a process, we find more and more as the process becomes complex that the value of the human's contribution depends upon how much freedom he is given to be disorderly in his course of action.
To Engelbart, intelligence is augmented insofar as a human's "intellectual capabilities" are organized into "higher levels of synergistic structuring". Much of the augmentation comes from the conceptual and procedural restructuring that enhances intelligent behaviour, i.e. goal-directed behaviour. It relies on the premise that some concepts are much easier to think about once the correct mental representation is chosen. For example, Arabic numerals are much better for doing math than Roman numerals or Chinese characters. Hence, the focus of his agenda is a system of external symbol-manipulation augmentation that resembles our computers, the internet, Google Docs, and hyperlinks. Indeed, it was incredible to see that as early as 1968 there was an interactive to-do list, real-time collaboration on a document, and a mouse. Engelbart's concept-focused framework does foreshadow today's LLMs, which turn natural language into a programming language, but not necessarily more hardware-based automation (e.g. exoskeletons), or a machine that fully replaces human cognitive capacity. His vision allows humans to ascend the hierarchy of abstraction while systems automate the bottom levels - the vision of human-like AI, however, usurps the human's role and becomes that very top-level abstraction.
It is striking that the revolutionary ideas for augmenting our intellect and improving our productivity turn out to be the boring, basic word processing and other software tools we no longer find interesting or worth thinking about, because we use them all the time without noticing them. I take that to mean that Engelbart accomplished his goals, and it's kind of exciting to think that we have already augmented ourselves - and will continue to do so.
H-LAM/T systems# As the human population grows, society gets more complicated and problems get bigger. Individuals have a hard time solving these problems alone.
Humans can’t solve hard problems alone, but a trained (T) group of humans (H) can, when they think with the right abstractions (L, languages), are armed with special man-made tools (A, artifacts), and use the right methodologies (M). This composite system is what Engelbart called an H-LAM/T system. An H-LAM/T system is the basic unit of any intellectual activity that tackles problems.
If we think of intelligence as the ability to solve problems, then a pen is an “intelligence amplifier”, because it allows us to solve more problems than we could without it. By the same token, a scheduling tool like a Gantt chart (Artifact) is an intelligence amplifier; so are terms like "user story", "OKR", and "north star metric" (Language), practices like prioritization frameworks and stakeholder reviews (Methodology), and years of talking to customers and making trade-off decisions (Training). The study of augmenting human intellect is the systematic study of improving each of these aspects.
Where does intelligence come from? To answer this question, we can look at how a computer solves a complicated mathematical problem. At the top level, the process looks simple: a user enters a formula and receives an answer. But beneath that lies a hierarchy of sub-processes: solving the problem programmatically through an algorithm that parses, transforms, and evaluates the expression step by step. Beneath that is the process of running the compiler, which translates those high-level instructions into machine code, sequences of primitive operations like comparisons, memory lookups, and arithmetic operations. And beneath even that, each operation reduces to logic gates flipping bits, governed ultimately by the physics of electrons moving through silicon. Can we say intelligence happens at one of the layers? We cannot, because each layer is only a mechanical transformation of the one below it, and individually, it doesn’t solve our problem. Our problem is only solved when we organize the different layers in a useful way, such that the organization produces effects greater than the sum of their individual parts. In other words, intelligence comes from a process hierarchy that can solve a given problem.
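The layering described above can be sketched in a few lines of Python (a minimal illustration of the idea, not an example from Engelbart's report): the top layer accepts a formula from the user, the layer below parses it into a tree, and the layer below that reduces the tree to primitive arithmetic operations standing in for machine instructions.

```python
import ast
import operator

# Bottom layer: primitive operations (stand-ins for machine instructions).
PRIMITIVES = {ast.Add: operator.add, ast.Sub: operator.sub,
              ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Middle layer: walk the parse tree, reducing it to primitive ops."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return PRIMITIVES[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

def solve(formula: str):
    """Top layer: the user enters a formula and receives an answer."""
    tree = ast.parse(formula, mode="eval")  # parsing layer: string -> tree
    return evaluate(tree.body)

print(solve("2 * (3 + 4)"))  # -> 14
```

No single function here is "intelligent"; the answer only appears because the layers are organized so each one's output is the next one's input.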
To really solve a given problem, for each process, we need a capability that can execute it. For example, to solve 2×2, which is a process, we need more than just "the capability to do arithmetic." We need the capability to recognize the notation, to understand that "×" denotes multiplication. We need the capability to know what multiplication means, that it is repeated addition. We need the capability to perform addition itself, computing 2+2. And we need the capability to hold quantities in memory so that intermediate results aren't lost. Each of these capabilities corresponds to a sub-process in the act of solving 2×2, and none of them alone produces the answer. Since a sophisticated problem can be broken down into a process hierarchy, if we can find a capability hierarchy to execute the matching process, the problem can then be solved.
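The 2×2 example can be made concrete with a toy capability hierarchy, one function per capability (the decomposition and names below are my own illustration, not Engelbart's):

```python
# A toy "capability hierarchy" for solving "2 x 2": each sub-capability
# is a function, and the top-level process composes them.

def recognize_notation(expr: str):
    """Capability: recognize that 'x' denotes multiplication."""
    left, op, right = expr.split()
    assert op == "x", "only multiplication notation is recognized"
    return int(left), int(right)

def add(a: int, b: int) -> int:
    """Capability: perform addition itself."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Capability: know that multiplication is repeated addition."""
    total = 0  # capability: hold an intermediate result in memory
    for _ in range(b):
        total = add(total, a)
    return total

def solve(expr: str) -> int:
    """Top-level process: solve '2 x 2' by composing the sub-capabilities."""
    a, b = recognize_notation(expr)
    return multiply(a, b)

print(solve("2 x 2"))  # -> 4
```

None of the functions alone produces the answer; `solve` works only because the capability hierarchy matches the process hierarchy of the problem.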
Capability Innovation# Now, what happens when an innovation in a particular capability is made? Engelbart argues that it will “have a far-reaching effect throughout the rest of one’s capability hierarchy”, because a change can propagate up to those higher-level capabilities that utilized it, and the changes from those higher-level capabilities can then propagate down to those beneath it, enabling latent capabilities. A change in capability can also restructure the entire process tree.
A simple example would be the "capability to correct a typo" while writing. Let's discuss how this capability propagated up to restructure the process of "writing an essay" and how it propagated down to enable the latent capability of "developing multiple ideas simultaneously."
With traditional typewriters, the process of correcting a typo resembles this: you paint a thin layer of correction fluid over the mistake, wait a few minutes for it to dry, type over the white fluid, then (maybe) retype the entire page. With the capability to easily correct what you wrote, the process becomes: delete the typo, type again. This capability propagates up to change your process of writing essays, because writing an essay is a higher-level process that utilizes “correcting a typo”. Before this capability, the process hierarchy of writing an essay resembled: draft an outline on paper with pencil, type the essay, fix mistakes, retype the entire thing, publish. Now, with this new capability, your process of writing might resemble: type as you think, change the structure as you have new ideas, develop multiple ideas at the same time, pick a branch, simplify, fix errors, publish. The structure of the process hierarchy has changed.
Unlocking new ideas# Changes in our tools don't only change how we do things, they also change how we think and what we can think about.
Note how the capabilities of “developing multiple ideas at the same time” and “simplifying after idea development” weren’t really usable before, because the cost of making changes was too high. They are latent capabilities, enabled indirectly by the capability of “correcting a typo”. Upward and downward propagation from low-level capabilities opens up many possibilities.
(Image: modern text editor vs. typewriter)
The Neo-Whorfian hypothesis and Co-Evolving with tools# A hard problem is often cross-disciplinary, which means solving a hard problem often requires synthesizing information and collaboration at scale. A solution to a hard problem is the natural product of a well-organized concept (mental) structure, represented in symbol structures (words, diagrams).
Yet, the intellectual activity of any group of people is limited by their ability to manipulate symbols. For example, if writing were physically harder, then our civilization would have produced fewer physical records, and we would have developed different concepts, different notations, different languages, and different social structures.
If the ideas we can have depend on our ability to manipulate concepts and symbols, and our ability to manipulate symbols depends on our methods and tools, and our ability to create tools depends on our ideas, then it means if we invent better tools, we can manipulate symbols better; if we can manipulate symbols better, then we can have new ideas; if we can have new ideas, we can invent better tools. It's an exciting self-reinforcing loop! Engelbart believes there are powerful ideas that we can only discover once we've built more powerful languages, artifacts, and methodologies.
Computers are symbol-manipulating machines; they can manipulate symbol structures at scale efficiently. So Engelbart argued that we should invest in developing computer-aided tools and methods and familiarize knowledge workers with these systems. He further argued that we should start by experimenting to see whether this self-reinforcing loop is real. If it is, then we should (1) invent better tools to help us get better at getting better (at inventing tools) and (2) apply these methods of self-improvement to help other problem solvers unlock new ways of thinking and new ideas.
This report covers the first phase of a program whose goal was to investigate and develop the means of augmenting the human intellect. The "means" Engelbart refers to are technological extensions - such as the computer - that enhance our sensory, mental, and motor capabilities. The program's main objectives were, first, to find the factors that limit individuals' capacity to solve problems and, second, to develop new techniques, processes, and systems suited to our needs that promote the progress of society. The first part of the text is the most interesting.
A really bold thing for 1962, although difficult to read. I'm not sure that everything it suggests has been implemented, e.g. we still communicate in linear texts, not in mind maps or other types of concept graphs. And, hell, it is still a pain to hyperlink PDF files.
It would be interesting to read a modern critical review of this book that explains which of its ideas developed beyond the author's guesses (e.g. hypertext, Wikipedia, Stack Overflow) and which did not develop at all (e.g. concept graphs as a means of human-computer interaction).
Over 60 years old now, Engelbart's report is still remarkably prescient and current. Some of the ideas laid the foundations for the computers in common use today, but the conceptual framework of augmentation is yet to be realised -- a great promise and potential now that the necessary capabilities of computing technologies are available and in rapid development.
Read ~60% of it; had a good group conversation on it. (Notes on that are private but ask me if you're interested)
Some of my notes:
* Engelbart strongly advocates structuring everything (especially prose) as dependency trees or graphs instead of linear/serial text. But structuring it linearly forces the writer to impose a narrative on it. Humans are very good at understanding narratives and very bad at understanding trees and graphs. Maybe it takes training.
* Computer as medium vs. computer as assistant.
* Are his cognitive science descriptions right?
* Essential reading: https://subterraneanpress.com/magazin...
* What happened to visual programming and tablet computers?
* The role of the algorist: fast and smart ways to merge, synthesize, search, recognize, store, index, and suggest? Especially program synthesis.
* Whom to augment first: programmers, sure. But some of the most important people are those working on climate change and global warming.
* Other methods of augmenting intelligence: addressing cognitive biases (along the lines of LessWrong; this ties in to climate change); central nervous system stimulants (Ritalin, Adderall).
The paper itself is 55 pages, printed double-sided.