This is a bit of an old story, but the head of IBM’s Watson division, David Kenny, wrote an open letter to Congress stating, essentially, that artificial intelligence poses no dangers and that regulating or taxing AI and robotics, or creating a UBI to deal with its effects, would lead to the collapse of a promising industry, economic slowdown, dogs and cats living together, etc., etc. It is, in other words, the kind of self-serving tripe that lobbyists write to Congress every day of the week and twice on Sundays. IBM is a clear leader in this space and has no interest in having the economics of the game changed by taxation, or in having to work around regulations to get to market. Profit before society, as always.
So the letter in and of itself is not interesting to me. But it did make me think about the process by which AI will become a danger to society. There is both a short to medium term issue and a longer term issue. I think Kenny’s letter shows how both the short and long term perils are going to be hard to deal with, given that corporate entities are largely the ones creating artificial intelligences. In short, in the long term, I don’t think the software industry is collectively smart enough not to let the beast off the chain.
The short to medium term concerns about artificial intelligence and automation are about economic disruption. Supporters of AI will point to the very real benefits that AI can bring in finding new and better solutions to problems and in relieving human workers of dangerous and/or boring jobs. The danger to society is that AI and automation destroy those jobs and either there is nothing to replace them (the lump of labor fallacy might not actually be a fallacy) or natural economics cannot replace those jobs fast enough to avoid societal upheaval. Kenny’s hands-off approach, with its disdain for regulation and taxation, is one route to that social upheaval, even assuming that jobs lost to AI can eventually be replaced.
The larger danger, though, is how companies make programs. One of the many ways that artificial intelligence advances is machine learning. To oversimplify a great deal, it is a form of trial and error: the machine is given a set of data on which it makes a set of predictions. Those predictions are tested against the real world, and the results are used to refine the machine’s decisions in a loop. The machine gets better and better at making decisions, in theory. This feedback process can and often does happen automatically, and can happen extremely rapidly when it does, leading to truly impressive breakthroughs. Which takes us to the control problem.
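To give a sense of how mechanical that loop is, here is a minimal sketch in Python. It is a toy of my own devising, not anyone’s production system: the “machine” is a single number, the “world” is a made-up rule (y = 3x), and the loop just nudges its guess toward whatever the feedback says.

```python
import random

# Toy version of the learn-from-feedback loop described above: the
# machine guesses a single weight, checks its predictions against
# observed outcomes, and nudges the weight to reduce the error.

def predict(weight, x):
    return weight * x

# Pretend real-world relationship the machine is trying to learn: y = 3x
data = [(x, 3 * x) for x in range(1, 6)]

weight = random.random()   # start with an arbitrary guess
learning_rate = 0.01

for step in range(1000):   # the automatic, rapid feedback loop
    for x, actual in data:
        error = predict(weight, x) - actual   # test against the world
        weight -= learning_rate * error * x   # refine the decision rule

print(f"learned weight: {weight:.3f}")        # converges toward 3.0
```

Note what is missing: nothing in the loop itself asks whether the thing being optimized is a good idea. It just grinds toward lower error.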
The strong version of the control problem concerns itself with controlling artificial intelligences once they are smarter than us. How do we keep them from smashing us on the way to silicon utopia? But there is a weaker version, I think, that will become an issue much sooner. How do we ensure that AI systems make decisions in line with what humans would want, and how do we ensure that they do not pursue their goals regardless of unforeseen consequences? We are already failing somewhat at that before machines have gotten all that smart. One of the keys is going to be the quality of the software that creates these machines. If that software has the proper controls, written to the proper level of quality, then we stand a good chance of avoiding the worst of the control problem.
And if we are going to depend largely on private companies to provide that software, we are probably fucked.
I write that line as someone who has made a career of building software, in various roles, almost exclusively in the private sector. I am proud of a lot of what I have helped build, and I have worked with and continue to work with some of the brightest, most dedicated people I have ever known. And we are still fucked if it is left entirely to private industry. Private industry does not really know how to write bug-free software, not collectively, not as a whole. It uses the wrong kinds of processes, has the wrong kinds of incentives, and uses the wrong kinds of tools for me to feel comfortable thinking that it can put in the kind of robust safeguards that the control problem demands.
There is no guarantee that proper safeguards will even be put into these systems. Businesses are, after all, in business. That means getting to market as fast as they can with a product that people are willing to pay for. In practice, this means that a lot of software is built using something called the Agile process. There is a lot to Agile, but for our purposes the important part is this: software features are added only as they are requested by the business or needed to make some requested feature function properly. Things go out the door not as complete items but as a series of ever-improved-upon next steps. It is entirely possible that proper controls are not introduced from the beginning because they are not “needed” from the beginning. I would say it is more than possible, it is likely, and the likelihood grows ever higher as AI migrates away from specialists and into the hands of more traditional product teams. Kenny is embarking upon a project that is going to have enormous consequences for society, and yet he wants no societal oversight whatsoever. What are the odds he listens to an engineer who wants to slow the next set of releases in order to put in safeguards he cannot absolutely prove are required?
Even if the safeguards are in place, there is ample reason to think that as we get deeper into the commercialization of AI, we are more likely to introduce bugs into those safeguards. First, because writing bug-free software is hard to do, even when we make it a priority. Security software, software where bug squashing is absolutely critical, still has major flaws, for example. Second, much commercial software is written in programming languages that lend themselves to run-time bugs, that is, bugs that happen during the execution of the program. Those kinds of bugs can often lead to unpredictable behavior, exactly the kind of thing that the control problem addresses.
Commercial software tends to be written in dynamically typed languages that allow for mutable state. What does that mean? Dynamically typed means that the kind of thing the program is working on is not defined when the rules for the program are written but rather is inferred when the program is run. It’s like I have a box I am going to put things in. In dynamically typed worlds, that box may be intended to hold dogs, but I never tell anyone that, so people are free to send me cats, dogs, whatever. Sometimes, when you do that, I will end up doing weird things, like putting a houseplant in the box and feeding it kibble, because a houseplant happens to fit in a dog-sized box. In programs, that kind of behavior can lead to odd results. Mutable state just means that the value of a variable can change over time. This can be a problem in that the value may change in ways I am not prepared to handle, which, again, leads to unexpected behavior.
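To put the box analogy in code: here is a deliberately silly Python sketch (the names are mine, not from any real system) showing both failure modes, the unlabeled box and the variable that quietly changes its contents.

```python
# A deliberately silly sketch of the "dog-sized box" problem.

class Dog:
    def eat(self, food):
        print(f"dog eats {food}")

class Houseplant:
    pass  # houseplants do not eat kibble

def feed(animal):
    # This "box" is intended for dogs, but nothing says so, and
    # nothing stops anyone from handing it a houseplant instead.
    animal.eat("kibble")

feed(Dog())  # works as intended
try:
    feed(Houseplant())  # fails only when the program actually runs
except AttributeError as err:
    print(f"run-time surprise: {err}")

# Mutable state: the same name quietly changes meaning over time.
occupant = Dog()
occupant = Houseplant()  # perfectly legal, and easy to miss
# Any later code that assumes occupant is still a Dog is now wrong.
```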
These are oversimplifications, of course, and there are strategies for dealing with these errors as they occur. But the strategies are not perfect, and they have their own drawbacks. And working in dynamically typed languages with mutable state can offer significant programmer-productivity benefits that make living with those drawbacks worthwhile. For normal business applications, the tradeoff is usually easily worth it, as the sketch below suggests. But with AI, that is likely not to be the case. Mistakes in the control of those systems could lead to devastating consequences. And everything about how private industry builds software makes those consequences more likely to come about, not less.
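One concrete example of those imperfect strategies is gradual typing. In Python, the same hypothetical feed() function from above can be annotated so that a static checker such as mypy flags the houseplant before the program ever runs. The drawback is exactly the productivity cost just mentioned: someone has to write and maintain the annotations, and the checker only protects what gets annotated.

```python
# The same sketch, annotated so a static checker (e.g. mypy)
# can catch the mismatch before the program runs.

class Dog:
    def eat(self, food: str) -> None:
        print(f"dog eats {food}")

class Houseplant:
    pass

def feed(animal: Dog) -> None:  # the box is now labeled "dogs only"
    animal.eat("kibble")

feed(Dog())         # fine
feed(Houseplant())  # mypy flags this line; Python alone still will not
```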
I am not claiming that there are easy answers for these problems. But when one of the leaders of commercial AI treats it as just another widget, when they argue that society needs to be kept as far away as possible from how these systems are built and deployed? Well, that is a strong reason for worry.