Being on the losing side of this battle (so far), we progressives and our fellow Democrats understand all too well how much control the GOP has gained over statehouses and in Congress.
Partisan gerrymandering is at an all-time high, and the upcoming census of 2020 is sure to be a battleground over which party (if either) controls how our populace is counted and how voting districts are drawn.
More is at stake in the coming decade than ever before.
The courts have been reluctant to intervene in the redistricting process for all but the most egregious cases (typically involving racial bias), seeing this process as a purely political matter in which the judicial branch historically has played a limited role.
I recently came across an article about a mathematician who has spent years studying the redistricting process and who, along with some fellow mathematicians, may help remove some of that reluctance.
Basically, one of the reasons that courts have not intervened, even in seemingly outrageous redistricting plans, is the lack of an objective measure of when a newly drawn district is “fair.”
Fairness often is in the eye of the beholder—cases have been made for differing measures of what’s fair, and each has its proponents and detractors, depending on whose ox is being gored.
But Jonathan Mattingly and his gang appear to have come up with a set of mathematical tests that, when applied to a proposed redistricting plan, offer a more objective measurement of fairness.
Mattingly’s “compactness score” allows testing of the fairness of a proposed districting plan, in a way that at least one court has accepted as meaningful.
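The article doesn’t spell out the formula behind Mattingly’s score, so as an illustration of what a compactness measure looks like, here is the classic Polsby-Popper metric, one widely used measure of district shape (not necessarily the one Mattingly’s group uses):

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4 * pi * area / perimeter**2.
    Scores range from 1.0 (a perfect circle) down toward 0
    for sprawling, tendril-shaped districts."""
    return 4 * math.pi * area / perimeter ** 2

# A circle (radius 1) scores exactly 1.0; a unit square scores pi/4.
print(round(polsby_popper(math.pi, 2 * math.pi), 3))  # 1.0
print(round(polsby_popper(1.0, 4.0), 3))              # 0.785
```

The intuition: a gerrymandered district tends to have a long, snaking perimeter relative to its area, which drives the score toward zero.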
In December 2016, a Wisconsin court considered a statistical analysis when ruling against partisan gerrymandering. And Mattingly will serve as an expert witness in a case this summer in North Carolina.
Of course, it’s no surprise that legislators are loath to consider such plans when they might reduce their party’s influence, but that may be changing.
As the article reports: “US legislators have been reluctant to embrace a mathematical solution to gerrymandering. But current court cases show that pressure to do so is mounting, Gall says. In the Wisconsin case Whitford v. Gill, federal judges used the efficiency gap to rule that the state’s voting districts represented an unconstitutional partisan gerrymander. The case could end up before the Supreme Court later this year.”
Mattingly’s group is not the only one working on this problem, and the “compactness score” is not the only objective measure being proposed.
Quoting the article again: “Political scientist Nicholas Stephanopoulos at the University of Chicago, Illinois, takes a much simpler approach to measuring gerrymandering. He has developed what he calls an ‘efficiency gap’, which measures a state’s wasted votes: all those cast for a losing candidate in each district, and all those for the victor in excess of the proportion needed to win. If one party has lots of landslide victories and crushing losses compared with its rivals, this can be a sign of gerrymandering. The simplicity of this metric is a strength, says Wang.”
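That wasted-vote arithmetic is simple enough to sketch in a few lines of Python. This is a minimal sketch of the efficiency-gap idea as described above (assuming exactly two parties, no ties, and a bare-majority winning threshold), not Stephanopoulos’s exact formulation:

```python
def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one per district.
    A party's wasted votes are all its votes in districts it lost,
    plus its votes beyond the bare majority in districts it won.
    Returns (wasted_a - wasted_b) / total votes; a large magnitude
    suggests one party's voters are being packed or cracked."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:  # assumes no exact ties
        total += a + b
        needed = (a + b) // 2 + 1  # bare majority needed to win
        if a > b:
            wasted_a += a - needed
            wasted_b += b
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Party A wins three narrow districts; party B is packed into one landslide.
plan = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(efficiency_gap(plan))  # -0.33: far more of B's votes are wasted
```

In the example, A turns 44% of the statewide vote into three of four seats, and the lopsided gap flags exactly the packing-and-cracking pattern the article describes.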
Does the census of 2020, and the redistricting that follows in each state, portend a more equitable system of democracy for the United States? Perhaps. But, as has happened in the past (the census of 1920, feeding into a highly partisan Congress, led to a suspension of the reapportionment process), politicians often find clever ways to get around the will of the people for a fair election system.
To make a positive outcome as likely as possible, please follow and support the work of the Electoral Integrity Project.