It’s hard to believe that The Two Faces of Tomorrow was published in 1979. And, even though 2017 isn’t as advanced and ready for outer-space colonization as imagined in this novel’s 2038 (p. 53), the issues addressed in this story are as vital and topical as if Hogan were writing in the present day. Within the past few months, for example, I’ve heard interviews on the impact of “thinking machines” on the near-future economy, and in just the last two years I’ve seen at least two films dealing with the machine/human singularity.
One of the characters in The Two Faces of Tomorrow is working on a dissertation entitled Evolution of Objective Hierarchies in Goal-Oriented Self-Extending Program Structures (p. 29). It sounds like some of the papers I’ve heard at the university in the last few years. On the very next page of the novel, a character takes notes on an electronic device called a “pad” and turns it off as a meeting comes to a close (p. 30). Tablets and iPads weren’t being developed in 1979; even the Newton and the Palm Pilot didn’t show up until the ‘90s. Indeed, knowing what computerized wargames looked like in 1979 (sometimes ASCII maps, sometimes text-based play using a physical map and counters, and sometimes extremely blocky units and terrain), it’s interesting to see Hogan’s vision of a Battle of Kursk, 1943 game (one of the most often-gamed tank battles of the 20th century) played over holographic terrain against a team at a rival institution (p. 92). In 1979, one needed academic or defense-department credentials to play such a linked game, but by the mid-‘80s, online services such as The Source, CompuServe (from H&R Block), and GEnie (from GE) allowed gamers to connect remotely (for anywhere from $6 to $12 per hour). So Hogan really does seem visionary in this book.
Alas, Hogan isn’t simply visionary about positive advances. His consultations with Marvin Minsky (a great pioneer of artificial intelligence) alerted him to the fact that a machine can process its programmed instructions in surprising ways, and with a learning program, or “Self-Extending Program Structure,” the surprises can be even more common. In The Two Faces of Tomorrow, one of those program structures takes a command so literally that it nearly destroys a group of human beings in the course of solving the problem. This incident (described in the prologue) is the catalyst for all of the events to come.
As a consequence of the unexpected (and dangerous) solution, the characters begin to rethink their assumptions about thinking and learning machines. They receive approval to adapt a space colony and create a situation in which the machines might (predictably) try to take over and attack the humans. To make this happen, they program a survival instinct into the thinking machine and then start attacking it to see whether, in the worst possible scenario, they could shut down an inimical system. They give the test system the ominous code name Spartacus. Does naming the system after a rebel gladiator provide any foreshadowing? Yes! Does the military’s idea of risk management prove just as potentially dangerous as an AI run amok? Yes! Is every attack, counter, and, of course, inevitable counter-attack fascinating? Yes!
One of the things I liked in the book was the depiction of the relationship between the scientist protagonist and his journalist love interest. The idea of a special attraction and interplay between people who are different was well portrayed early in the book: “With her agile and inquisitive mind and lack of scientific training, she had a tendency to zoom into the heart of an argument from a totally unexpected and often fascinatingly ingenious perspective.” (p. 112) I also liked some of the technological ideas that I had never thought of. For example, to build an incredibly light space-colony sphere, “Shell sections to cover in the skeletons were formed by spraying successive layers of aluminum vapor onto enormous inflated balloons of the correct shapes.” (p. 161) Perhaps my favorite part of the book was when the learning machine adapts to the human trait of “fear,” and what happens next amounts to something of an evolved Cold War solution (p. 377). In short, both the characters and the plot resolution seemed human and satisfying. As far as I’m concerned, this is “hard” science fiction at its best.