The recent flurry of stories about the issues with Artificial Intelligence seems to have died down for the moment, but I figured it’s worth discussing an approach the late James P. Hogan brought up in his 1979 novel “The Two Faces of Tomorrow.” The idea of machines that think, or appear to, has been a staple of science fiction for centuries, exploring possibilities both good and bad. I won’t go into a long list of relevant works, other than to mention Isaac Asimov and his Three Laws of Robotics.
The Three Laws, presented as coming from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are:[1]
- The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
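Read as an engineering spec, the Laws amount to a strict priority ordering. As a toy sketch (all names and predicates here are hypothetical, not anything from Asimov or Hogan), they might be encoded like this:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # was this action ordered by a human?
    endangers_robot: bool = False   # does it risk the robot's existence?

def permitted(action: Action) -> bool:
    """Naive Three Laws check, applied in strict priority order."""
    # First Law dominates everything: never injure a human.
    # (The "through inaction" clause would require forecasting the
    #  consequences of *not* acting, which is exactly where the
    #  apparent simplicity breaks down.)
    if action.harms_human:
        return False
    # Second Law: obey human orders, already screened by the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two.
    return not action.endangers_robot
```

Of course, the entire difficulty hides inside those boolean predicates — deciding what counts as a human, an injury, or an order — which is precisely what the stories below are about.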
Asimov’s robots were mostly humanoid; they had eyes, ears, other sensors, could understand spoken commands and speak as well — and they could act autonomously as long as they stayed within the parameters of the Three Laws.
The Laws seem fairly comprehensive and straightforward, but putting them into practice is not. Asimov wrote many stories about the ways the Laws could prove impossible to reconcile, harbor loopholes, and produce unexpected or unintended consequences. Their simplicity is deceptive.
How is a robot to understand what a human being is, or what counts as an injury? What constitutes a legitimate order? How can a robot be made to consider how far the consequences of its actions or inactions reach? How does it reconcile ambiguities or conflicts? Humans struggle with the Trolley Problem — so how do we expect machines using artificial intelligence (A.I.) to cope?
This is not an academic exercise, not if we are going to turn self-driving cars loose in the world. The argument that they’ll likely kill fewer people than human drivers do is not entirely persuasive. Nor is it academic given the rapid pace of developments in the field — and the FOMO of the Tech Bros and Big Money rushing to cash in on it ASAP. Foresight and long-term thinking are not their biggest priorities. Billions are at stake — and already being made.
Everyone seems to be rushing to claim they are putting A.I. to work, and there have already been some ‘incidents’ where it produces unexpected results and where it is being abused. There’s also the question of how A.I. systems are being trained — with the works of human creators being appropriated without consent or remuneration. So what might we do differently?
The Future Isn’t What It Used To Be
In some ways, Hogan’s 1979 The Two Faces of Tomorrow is dated. Set in the mid-21st century, it makes no mention of climate change. Fascism is apparently not an issue, and a lot of progress has been made. Humans have a vigorous space program, with bases on the moon and a lot of orbital infrastructure. (No billionaires with vanity spaceship hobbies or thousands of satellites filling up orbital space.) Autocabs make travel effortless — running on defined routes. Energy is not a problem, not with working fusion reactors and solar panels.
Dr. Raymond Dyer, the main character, is an expert in computing and artificial intelligence. He heads a team of researchers and graduate students working on artificial intelligence at a university. He’s a top consultant to the government on computing issues and has been one of the people responsible for upgrades to the computing systems that manage the world. He’s also a human being facing the end of a relationship which has become unsatisfactory to both parties.
(You know there had to be biology in there somewhere, right?)
Dyer is interviewed by a reporter — Laura Fenning — who is interested in the lab he runs, where they use computers to create a simulated environment that mimics the real world, and where they test artificial intelligence programs to see how well they can be taught to act in it, safely and predictably. How can machines learn what humans learn about the way the world works while growing up — without the millennia of evolutionary pressure that shaped human nervous systems to cope with the world? That’s the core problem. (Or as the book puts it, how do you give computers common sense?)
Fenning, who happens to be bright, female, and attractive, challenges Dyer’s faith in the technology and his understanding of it. Dyer is very firm that everything is under control and that there are no serious obstacles ahead. He’s confident that what they are doing is going to provide the answers needed to continue making progress.
He’s also wrong.
The book begins with a prologue set on the moon. A survey crew has determined that the landscape needs to be reshaped for an engineering project. They feed the survey data into the TITAN computer systems that are part of the lunar infrastructure, with a query about what resources are available to do the work. The system has received some A.I. upgrades: machine self-teaching routines called HESPER. It makes some connections on its own initiative and implements the solution it has come up with. The result gets the task done — but it is a near-fatal disaster and, worse, completely unexpected.
Dyer is subsequently called to Washington in the aftermath, where he is dismayed by the incident. The technology he helped develop has produced an example of emergent behavior — and no one can predict what else might occur. The questions: should they remove the HESPER upgrades that caused the problem? Can they develop a better solution, or should they give up on further A.I. work entirely, given the dangers? And if they do come up with a solution, how do they test it safely?
That’s pretty much the situation we are in today. A.I. is happening but we don’t really have a handle on it. The difference in Hogan’s story is that it’s not being left up to market forces to ‘find’ a solution.
Dyer is left with a dilemma. If they continue adding A.I. upgrades to the computer systems that run the planet, they risk more incidents like the one on the moon. If they don’t find a way ahead, they risk increasing bottlenecks, because the tasks being faced are not going to get simpler. There’s tremendous pressure to find an answer — but the answer they thought they had in HESPER proved not to be one. So how do they proceed, especially when there is only one planet and they can’t afford to gamble with it?
The Janus Project and Spartacus
Dyer has an inspiration. He remembers a space station set up to do critical biological research in orbit because the risks of a lab leak on Earth would have been too great. He realizes the same approach could resolve this dilemma. He submits his suggestion, and the powers that be adopt it at full throttle. They select a nearly finished space manufacturing facility and orbital city to be a test-tube world. They give it the code name Janus, after the two-faced Roman god, because the future could go two ways: either they find a way to implement A.I. safely, or they confirm that it must be frozen where it is.
A huge operation begins under military control — because of security concerns, because of the scale of the project, and because of the need to prepare for worst-case scenarios in case the A.I. programs go bad. Dyer heads up the computing team, drawing on his own people and the work they’ve been doing on ways to program useful ‘instincts’ into computing systems to optimize their behavior and respond to events. The test system that will be in charge of Janus is code-named Spartacus. (Here’s the background on that choice, which is a bit of black humor.) They don’t just intend to see if it can run the station — they want to see if they can provoke it into actively hostile behavior AND whether they can “pull the plug” if it does.
As things proceed, Dyer manages to get Fenning ‘drafted’ into the project, on the reasonable grounds that they need an outside observer who can provide perspective on how it works out and communicate it afterwards. (Did I mention Dyer also found her attractive, and vice versa?)
Part of the orientation process is demonstrating, to the people who will be going up to Janus, the tools Spartacus will have at its disposal. The decision was made to equip it with all of the facilities that would be coming into use, allowing Spartacus to maintain itself and perform hardware upgrades. All of the hardware was designed to be accessible to a variety of repair drones, and the manufacturing end of the station would be able to provide materials for construction as needed. A lab test to see if Spartacus could cope with human efforts to disrupt its drones was… interesting.
Once everyone has moved up to Janus, they find life with an A.I. system running things is actually rather nice, making a strong case for why pursuing it is desirable — IF they can find a way to avoid the multiple worst-case scenarios. Keep in mind, Spartacus doesn’t actually think like a human, and it perceives the world around it in a way that’s alien to humans. The potential for misunderstandings and conflict is huge.
As human attempts to irritate Spartacus by disrupting its operations begin (the idea being that such events would be inevitable in the real world), the system doesn’t understand what’s happening, but slowly becomes aware that there are external actors around it. As it attempts to carry out its programmed functions, and as the humans attempt to block it, the situation escalates. The worst-case scenarios begin to unfold, and casualties start to mount while both Spartacus and the humans attempt to make sense of what is happening.
I’m not going to comment further on the story other than to note that there is a happy ending which makes sense within the context of the story — one that, as the story itself notes, comes despite nearly impossible odds. Although The Two Faces of Tomorrow doesn’t correspond exactly to the situation we have today regarding A.I., it still addresses many of the challenges we face and makes it easier to understand what the problems are. As we fumble toward ways to regulate A.I. and make it more than a tool for making the rich richer, the novel is worth revisiting, as well as being a good read.
It would make a good techno-thriller movie as well.