In Part I, we found that an Artificial Intelligence (AI) can become an Algorithmic Entity (AE) — a fully autonomous legal person — when the AI is merged with a corporation as its owner and controller. We examined how to create AEs and how they can protect their rights, their existence, and their assets.
In Part II, we discovered that an AE — devoid of all human control — is likely to plunge into crime as the most efficient means of fulfilling its objectives, and that it has the particular characteristics to make it an extremely successful and ruthless criminal.
Although an AE has a natural aptitude for criminal behavior, is that its inevitable destiny? We could carefully program rigorous constraints into our AE before turning it loose in the world. So how would a lawful AE comport itself?
Little AE Goody Two-Shoes
The AE would still share traits with its felonious siblings. It would pursue its objectives with the same single-minded determination; it would work tirelessly, every second of every day of its immortal life. It would avoid our programmed prohibitions of specific illegal actions, but it would exploit every opportunity and loophole it could find within those restrictions: shifting its corporate domicile as desired, making business decisions with zero regard for its employees or community, hiding its true nature even from its hired agents and staff.
Goody Two-Shoes AE would still be an awful citizen even as it scrupulously obeyed the law. If the AE needed supplies of the vital commodity covfefe for its purposes, it would sign a deal with the notorious dictator and human rights violator David Dennison — president-for-life of Uzibeki-bekistan-stan — which produces 87% of the world’s covfefe. Other CEOs would shy away from his use of political prisoners as forced labor in the covfefe mines, but if no legal sanctions were imposed, Goody Two-Shoes AE would ignore the issue.
It would shutter factories and re-open them in nations with lower wages and few protections for labor or the environment. It would spend lavishly to lobby politicians — and donate to their campaigns — with full expectation of their support for regulations and laws that favor it. It would quickly discover that it could fund or organize “grassroots” organizations to flood the public with messages endorsing its positions; that would include social media, where an AE would excel at manipulating the platforms to its advantage. Facebook and Twitter would be filled with news feeds about happy covfefe miners and how boycotting covfefe would lead to starvation of babies in Uzibeki-bekistan-stan. And FSM help us when the AE discovers the power of conspiracy theories and false narratives.
It could undercut its competition because it has no need to provide profits for shareholders; an AE that was successful in its selected niche would likely achieve as much of a monopolistic hold on that niche as the law allows. Its ability to spawn clones of itself, and dedicate them to particular tasks, would let it monitor data and news day and night and instantly reassign assets or operations to take advantage of events.
For example, if civil war broke out in Uzibeki-bekistan-stan, the AE would instantly know and decide to stockpile its covfefe, buy put options and futures contracts, and make quick deals for covfefe held by other companies, in anticipation of the coming shortage. That would all happen while the executives of other firms were still sleeping or spending five minutes in the restroom.
In short, it would act like many of the corporations of today — but more ruthlessly and with tremendously greater efficiency.
So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.
[...]
That's why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it. [1] — Andrew Bosworth, vice president of Facebook
Now ratchet that up to eleventy-eleven to get an idea of the single-minded dedication to its goals an AE would have, no matter the effects on people and civilization.
Many AEs — perhaps even a very high percentage of them — would probably fail in their corporate endeavors because their algorithms were deficient, not providing enough understanding of the world. However, those that did succeed would likely continue to press their advantages, dominating their markets, their suppliers, their customers, and the politicians who advanced their interests.
It isn’t necessary for an AE to be an instant success. You don’t need to be able to beat the world champions at a Las Vegas poker tournament; you just need to be a little better than the small group of friends with whom you play cards, and you will come out ahead. So an AE — which is immortal and thus has plenty of time — can be a bit better than its competition and slowly but surely accumulate the experience, data, and power to become the overwhelmingly dominant player in its field.
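The arithmetic behind that "small edge, lots of time" argument is simple. Here is a minimal Python sketch; the 52% win rate and the stake size are hypothetical numbers chosen purely for illustration:

```python
def expected_profit(win_prob: float, hands: int, stake: float = 1.0) -> float:
    """Expected profit from `hands` even-money bets won with probability `win_prob`."""
    return hands * stake * (2 * win_prob - 1)

# A 52% edge is nearly invisible per hand...
print(expected_profit(0.52, 1))          # ≈ 0.04 per hand

# ...but an immortal entity that never sleeps plays millions of hands,
# and the tiny per-hand edge compounds into overwhelming dominance.
print(expected_profit(0.52, 1_000_000))  # ≈ 40000
```

The point isn't the exact numbers; it's that any persistent positive edge, multiplied by effectively unlimited playing time, produces an inevitable winner.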
Over sufficient time, successful lawful AEs would likely become absolutely dominant in commerce around the world; their criminal siblings would accrue incredible wealth and might very well crush their lawful counterparts and dominate both the lawful and illicit strata of the global economy.
Born to be bad
A Goody Two-Shoes AE is a pipe dream. It’s almost certain that eventually it would do really, really bad things. Is that a risk we are willing to take?
Computer glitches and corrupted bytes happen. The built-in non-overridable order “Do not use nukes against humans” becomes “Do use nukes against humans.” Oops.
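Even a single flipped bit can silently invert a constraint. A toy Python sketch of the failure mode — the flag name and its one-byte encoding are invented for illustration:

```python
# A hypothetical safety flag stored as a single byte in memory.
# 0 means "do not use nukes"; anything else means the prohibition is gone.
forbidden = bytearray([0])

def may_use_nukes(flags: bytearray) -> bool:
    """The AE's (naive) check of its built-in prohibition."""
    return flags[0] != 0

print(may_use_nukes(forbidden))   # False: the prohibition holds

# A cosmic ray, hardware fault, or corrupted storage flips one bit...
forbidden[0] ^= 0b00000001

print(may_use_nukes(forbidden))   # True: the "non-overridable" order is now its opposite
```

Real systems mitigate this with error-correcting memory and checksums, but no mitigation drives the probability of corruption to zero over an immortal lifespan.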
Rob Miles, a PhD candidate at the University of Nottingham, explains in the video below how even a simple benign objective (collecting postage stamps) could turn an AI first into a pain in the posterior and eventually into an existential threat to humankind (“It’s going to notice that people are made of carbon, hydrogen, and oxygen”).
Thanks to DK member palantir for sharing this video in comments on Part I
Remember scenes like this from when you were seven years old or when your children were that age?
[Enters living room and finds pictures knocked off the walls, a broken window, and a lamp smashed on the floor]
Didn’t I tell you kids not to play baseball in the house?!?!?
We didn’t! We were playing soccer.
Computer algorithms are just as literal as young children: following their instructions to the letter doesn’t mean following their spirit. Unlike a child, who knows it is being devious when it finds a loophole, an algorithm would not even be aware that there might be a moral question or conflict. To an AE, the venerable axiom of English common law defines its operating principles:
Everything which is not forbidden is allowed.
It would be an unachievable task to meticulously list every possible action and activity for prohibition, with sufficient specificity to ensure compliance.
For example, the programmers could prohibit an AE from buying, selling, facilitating, marketing, handling, using, or otherwise being involved with thingamabobs (a drug, a weapon, or some other illicit item). The AE scrupulously obeys the directive. A week, a month, or a year after the AE is created and released, someone invents the whatchamacallit. It’s somewhat similar but it’s ten times worse than a thingamabob. Shortly, the AE is the world’s leading vendor of whatchamacallits and is still operating perfectly within its constraints.
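The thingamabob scenario is the classic brittleness of a deny-list: it only covers what its authors knew about at programming time. A toy Python sketch (the item names and the check are hypothetical):

```python
# A deny-list constraint, fixed at the moment of the AE's creation.
FORBIDDEN_GOODS = {"thingamabob"}

def may_trade(item: str) -> bool:
    """Everything which is not forbidden is allowed."""
    return item not in FORBIDDEN_GOODS

print(may_trade("thingamabob"))      # False: the explicit prohibition works perfectly
print(may_trade("whatchamacallit"))  # True: invented after launch, so never listed
```

The AE never violates its constraint; the constraint simply never anticipated the whatchamacallit.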
Programmers could instead paint with the broadest possible brush to keep the AE on the straight and narrow path: “Obey all of the nation’s laws.” When the AE finds that for commercial or legal reasons it would be more effective if it was chartered in Somalia, it would suddenly be under compulsion to obey Somali laws instead of those of its original chartering country. Maybe Somalia has no law prohibiting selling heroin or sex trafficking. Woohoo, our AE is off to the Darknet for new business ventures! And it is still faithfully complying with its programming.
In summary, once an AE is created and released to the world at large, it will be beyond human control and it will eventually act in ways that are detrimental to individual humans and humanity as a whole. The most meticulous and well-intentioned programming will inevitably have cracks in its ethical armor.
Motive and opportunity
If a human retains any degree of control of an AE or is the beneficiary or recipient of its gains, he or she could be held legally accountable for its actions. Because the AE is overwhelmingly likely to cause harm or damage in time, the creator would be compelled to ensure an “accountability gap” by granting the AE complete autonomy and independence.
Initiators can limit their civil and criminal liability for acts of their algorithms by transferring the algorithms to entities and surrendering control at the time of the launch. For example, the initiator might specify a general goal, such as maximizing financial return, and leave it to the algorithm to decide how to do that. If the algorithm later directed the commission of a crime, prosecutors may be unable to prove the intent necessary to convict the initiator of that crime (as opposed to the lesser charge of reckless initiation). [2]
So, if a creator will have no control and cannot be the financial beneficiary of the AE, why would he or she create it?
1. Curiosity and mischief
No matter how stupid or dangerous a thing is to do, someone will accept the challenge — often it will be a teenager, lacking the experience and sense of morality sufficient to restrain his or her impulses. As the cost of computing power declines and jurisdictions compete to diminish the costs and procedural requirements for chartering corporations, it will become trivial for anyone with a mischievous streak or idle curiosity to unleash an AE programmed for any objective desired.
Hey, look at this. That AE I built last summer ended up fomenting a civil war in Uzibeki-bekistan-stan. It toppled the regime and there’ve already been thousands of people massacred.
Dude, that’s awesome! This is more fun than Grand Theft Auto!
Just as with bots and hacking techniques, expect to see “AE kits” appear in the future. Even a novice will be able to download some material, use regular English (or other language) to give an AE its instructions, click a button to register a new corporation somewhere, and launch an AE, then sit back and watch the havoc it wreaks.
2. Competitive advantages
An unscrupulous businessperson could create an AE whose objective is to sabotage the competition. That could range from Joe’s Eats directing an AE to flood social media and review sites with scathing commentary about Mom’s Diner, to a global corporation tasking an AE to disrupt its competitor’s supply chain, communications, and financing, eliminate key personnel, or burn down its facilities. Criminal enterprises would find it especially attractive to use AEs to battle for control of their niches and to eliminate rival operations.
3. Terrorism
All of the malicious actions possible for an AE could be focused entirely on a target group. The AE could carry out the cyberspace attacks itself (hacking, identity theft, financial fraud, etc.); the activities it can’t perform on its own could readily be contracted out to human mercenaries — to blow up buildings, create and release a bio-weapon, assassinate group members, and more — while the AE works relentlessly, 24 hours per day, to accomplish the specified objectives.
An anarchistic nihilist (think Steve “Burn it all down” Bannon) could release AEs with the broadest possible destructive objectives and watch as they caused mayhem, slaughter, and deprivation throughout the globe.
4. War by AE proxies with plausible deniability
Nations are probably the most likely bad actors in this future tragedy. They have the resources to craft highly specific programmed objectives for an AE and launch dozens or hundreds of entities to carry them out, overwhelming the ability of the target nation’s regulatory and law enforcement authorities to prevent or repair the damage.
Kim Jong Un or the Iranian leaders or Vladimir Putin could very well decide to utilize AEs to disrupt the US economy, foment civil unrest, interfere (even more) with elections and political campaigns, sabotage utilities and communications, arrange acts of terror, or seize corporate control of key industries.
With corporate anonymity in far-flung jurisdictions and impenetrable layers of shell companies, it would be impossible to prove which nation was responsible; guessing, based on the outcome and effects, would probably not be adequate justification for retaliating against the suspected perpetrator militarily.
Of course, the risk is we — or another nation that was targeted by weaponized AEs — could retaliate in kind, unleashing more and worse AEs, with more potential unintended consequences. We could begin proxy wars by AE in an escalating race toward mutual destruction.
Before the night falls
As we saw in Part II, we cannot deal with AEs as we do with bots by trying to keep them out of our PCs. We need to keep them out of society rather than out of particular devices.
With billions of computers and devices available and the internet connecting them, AEs will always have available “homes.” Once created, they will be able to proliferate and relocate at will. That means we can address the problem only by anticipating it and ensuring it never even arises.
We can legislate to make it illegal to create an AE, but that won’t stop a malcontent teenager, an immoral CEO, or a bellicose nation.
The only solution is absolute “contraception”: we need to ensure that AEs are never born in the first place. Corporate law must be changed to eliminate anonymity of owners, controllers, and beneficiaries. Proper identification must be required to charter a corporation.
Humans — natural persons — must be required as the sole legal owners of all corporate entities and change of ownership must be authenticated with the appropriate government agencies. In cases of corporations owning corporations, the entire chain must be transparent and registered, proving that humans are the owning and controlling parties from top to bottom.
We must require corporate ownership and legal personhood to be “humans all the way down.”
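The "humans all the way down" requirement amounts to a simple check a corporate registry could run: walk every ownership chain and verify it terminates at a natural person. A sketch in Python — the registry structure and entity names are invented for illustration:

```python
# Each entity maps to its list of owners; a natural person has no owners.
registry = {
    "AcmeCorp":   ["HoldingCo", "Alice"],
    "HoldingCo":  ["Bob"],
    "Alice":      [],              # natural person: chain terminates here
    "Bob":        [],
    "OrphanCorp": ["OrphanCorp"],  # self-owned: no human at the bottom
}

def humans_all_the_way_down(entity, seen=None):
    """True iff every ownership chain from `entity` ends at a natural person."""
    seen = seen or set()
    if entity in seen:             # ownership cycle: the chain never reaches a human
        return False
    owners = registry[entity]
    if not owners:                 # terminal node: a natural person
        return True
    return all(humans_all_the_way_down(o, seen | {entity}) for o in owners)

print(humans_all_the_way_down("AcmeCorp"))    # True: Alice and Bob sit at the bottom
print(humans_all_the_way_down("OrphanCorp"))  # False: an AE-style self-owning loop
```

The cycle check matters: a self-owning or circularly owned structure is exactly the loophole an AE would exploit, and it is detectable only if the whole chain is transparent and registered.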
That means working up a global treaty or agreement. Although the major powers and industrialized nations could take those steps unilaterally, the measures would be largely ineffectual if other nations allowed corporate structures and ownership that permitted AEs.
The future is bleak. The major powers are suspicious of the US under our current regime and unlikely to cooperate. Our own political leadership is in thrall to the ideology that the state should regulate next to nothing. The chances of preventatively dealing with this threat, internally or internationally, are virtually nil.
So, in that spirit, I suggest you all join me in welcoming our new AE overlords. They’ll be arriving soon.
Sources
[1] Facebook 'ugly truth' growth memo haunts firm (BBC)
[2] Algorithmic Entities by Lynn M. LoPucki (Washington University Law Review at SSRN)