Vote-counting: election-system analysis

Let's now consider real-world results.

I have collected the ballots for all recent IRV-using elections in Minneapolis MN, Maine, and Burlington VT. Burlington used IRV for its mayoral elections in 2006 and 2009, but the 2009 election produced some weird results, and the city dropped IRV and returned to its previous system: a separate runoff election.

For Maine, the ballots are only available for elections where the top preferences did not give a majority winner -- 9 of the recent elections for Congressional and state positions.

For Minneapolis, ballots are available for all 66 elections that used IRV. Of these, 44 had a majority winner in the top preferences, leaving 22 without one.

I then used several vote-counting methods on these ballot sets: FPTP, top-two, IRV, Borda, STAR, and Condorcet.

For the Burlington 2006 mayoral election, no candidate got a majority, but all the methods agreed on the winner: Bob Kiss (Progressive).

For the Burlington 2009 one, however, the methods gave different results:
  • Andy Montroll (Democratic) -- Borda, STAR, Condorcet
  • Bob Kiss (Progressive) -- TopTwo, IRV
  • Kurt Wright (Republican) -- FPTP
  • Dan Smith (Independent)
  • James Simpson (Green)
  • (write-ins treated as one candidate)
I'm listing them in their Condorcet-sequence order -- each one is a one-on-one winner relative to the ones below.
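Such a Condorcet sequence comes from the matrix of one-on-one contests. A minimal sketch in Python (the ballots here are invented for illustration, not the actual Burlington data):

```python
from itertools import combinations

def pairwise_wins(ballots, candidates):
    """Count, for each ordered pair (a, b), the ballots ranking a above b.
    Each ballot is a preference tuple, most to least preferred; unranked
    candidates are treated as ranked below all ranked ones."""
    wins = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot, n in ballots.items():
        rank = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            ra = rank.get(a, len(candidates))
            rb = rank.get(b, len(candidates))
            if ra < rb:
                wins[(a, b)] += n
            elif rb < ra:
                wins[(b, a)] += n
    return wins

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head-to-head, or None."""
    wins = pairwise_wins(ballots, candidates)
    for a in candidates:
        if all(wins[(a, b)] > wins[(b, a)] for b in candidates if b != a):
            return a
    return None

# Toy ballots: B is the Condorcet winner despite the fewest first preferences
ballots = {("A", "B", "C"): 35, ("C", "B", "A"): 34, ("B", "C", "A"): 31}
print(condorcet_winner(ballots, ["A", "B", "C"]))  # B
```

Repeating the search on the candidates below the winner gives the full Condorcet sequence.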
 
Of the 9 Maine elections without majority winners, 8 of them had all methods agreeing. The exception was the 2018 general election for US House district 2.

The first round of IRV counting gave
  • REP Poliquin, Bruce: 134,358
  • DEM Golden, Jared F.: 132,145
  • Bond, Tiffany L.: 16,650
  • Hoar, William R.S.: 6,996

BP was ahead, but not by much, and not enough to give him a majority of votes.

Dropping the two lowest ones gave
  • DEM Golden, Jared F.: 142,664
  • REP Poliquin, Bruce: 139,238

This gave JG a victory, and the Maine Republican Party has been smarting over this loss ever since.

Of the six methods that I've looked at, BP got a victory with FPTP and Borda, and JG with top-two, IRV, STAR, and Condorcet. In fact, I find the Condorcet sequence

Golden > Poliquin > Bond > Hoar

So IRV averted the loss that vote splitting would otherwise have inflicted on JG.
 
Turning to Minneapolis, I find ballot sets for all 66 of its IRV elections. The numbers:
  • All: 66
  • First-round majority winner: 44
  • No such winner: 22
  • All methods agreeing on a winner: 16
  • Different methods having different winners: 6
  • FPTP differing from the others: 3
  • Borda differing from the others: 2
  • FPTP and Borda differing from the others, though agreeing with each other: 1
In all cases, the IRV winner was also the Condorcet winner.

Every RCV Election in the Bay Area So Far Has Produced Condorcet Winners - FairVote. The article looked at all the IRV elections there before its dateline of 2017 Jan 6.

  • All: 138
  • IRV winner did not get a majority in the first round: 46
  • IRV winner was ahead (if ahead) by less than 5% in the first round: 17
  • IRV winner was behind in the first round (did not win FPTP): 7
  • IRV winner did not win a virtual top-two runoff: 2
In all cases, the IRV winner was also the Condorcet winner.

In the first round,
  • Runner-up had at least 2/3 as many votes as the winner: 51%
  • Runner-up was less than 20% behind the winner: 20%
  • Runner-up was less than 5% behind the winner: 11%
How many candidates, how many elections, and what average fraction of the vote?
  • One: 27, 96.4%
  • Two: 29, 66.5%
  • Three or more: 82, 57%
  • Total: 138, 66.7%
 
Lack of Monotonicity Anomalies in Empirical Data of Instant-runoff Elections: Representation: Vol 0, No 0
The instant runoff voting (IRV) method fails the monotonicity criterion. This means in an IRV election it is theoretically possible for a winning candidate to lose an election if certain ballots are changed to raise the otherwise winning candidate higher on the ballot. We analysed data from over 100 real-world IRV elections to ascertain if any demonstrated a monotonicity anomaly. Despite theoretical research indicating potentially high incidence of such voting anomalies, our investigations found only one election showing a monotonicity anomaly: the 2009 Burlington mayoral election. Burlington was also the only election resulting in different winners using IRV, Borda Count, and Pairwise Comparison voting methods.
"Pairwise Comparison" = Condorcet

It is paywalled, so I could not get any further details, like how they looked for monotonicity violations. Did they randomly change some ballots?


Here is an example of IRV monotonicity violation. Pizza toppings again: sausage, anchovies, and peppers.
  • 28 of S, A, P
  • 5 of S, P, A
  • 30 of P, A, S
  • 5 of P, S, A
  • 16 of A, P, S
  • 16 of A, S, P
First round: S 33, P 35, A 32
Anchovies drop out
Second round: P 51, S 49
Winner: peppers

These ballots have Condorcet sequence A, P, S, and Borda count A 222, P 191, S 187

But let's help peppers a bit by making this change:
  • 3 of S, P, A
  • 7 of P, S, A
First round: P 37, A 32, S 31
Sausage drops out
Second round: A 60, P 40
Winner: anchovies

These ballots have Condorcet sequence A, P, S, and Borda count A 222, P 193, S 185

What happened? In the first set, the Condorcet winner dropped out from not having enough first-place support. The changes for the second set made one of the other candidates drop out, allowing the Condorcet winner to advance and win.
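These two ballot sets can be checked mechanically. A bare-bones IRV counter and Borda counter in Python (no tie handling; first place is worth 3 Borda points here, matching the totals above):

```python
def irv_winner(ballots):
    """ballots: {preference-tuple: count}. Eliminate the candidate with
    the fewest top preferences until someone has a majority of the
    continuing ballots."""
    remaining = {c for order in ballots for c in order}
    while True:
        tally = {c: 0 for c in remaining}
        for order, n in ballots.items():
            top = next(c for c in order if c in remaining)
            tally[top] += n
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > sum(tally.values()) or len(remaining) == 2:
            return leader, tally
        remaining.remove(min(tally, key=tally.get))

def borda(ballots):
    """Borda count with first place worth (number of candidates) points."""
    score = {c: 0 for order in ballots for c in order}
    for order, n in ballots.items():
        for pts, c in zip(range(len(order), 0, -1), order):
            score[c] += pts * n
    return score

# First ballot set: S = sausage, A = anchovies, P = peppers
set1 = {("S", "A", "P"): 28, ("S", "P", "A"): 5, ("P", "A", "S"): 30,
        ("P", "S", "A"): 5, ("A", "P", "S"): 16, ("A", "S", "P"): 16}
# Second set: 3 of S,P,A and 7 of P,S,A -- extra first places for peppers
set2 = {("S", "A", "P"): 28, ("S", "P", "A"): 3, ("P", "A", "S"): 30,
        ("P", "S", "A"): 7, ("A", "P", "S"): 16, ("A", "S", "P"): 16}

print(irv_winner(set1)[0])  # P -- peppers wins the final round 51-49
print(irv_winner(set2)[0])  # A -- anchovies wins the final round 60-40
print(borda(set1))          # A 222, P 191, S 187, as above
```

Helping peppers flips the IRV winner from peppers to anchovies, while the Borda ranking A > P > S is unchanged.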

The Burlington 2009 election was an example of this sort of behavior, where Condorcet winner Andy Montroll dropped out because of lack of first-place support. If some of Kurt Wright's supporters gave Bob Kiss a higher preference, then it would have been KW who dropped out, and AM would have won in the second round. But that would have meant a lot of right-leaning voters voting for a left-leaning candidate, and that would also have required a lot of polling to see whether this was worth doing.
 
A voting theory primer for rationalists - LessWrong by Jameson Quinn
Voting theory, also called social choice theory, is the study of the design and evaluation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political scientists say "electoral formulas"). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome.
"Democratic" means:
  • There are many voters
  • There can be more than two candidates
Usually also including:
  • Anonymity; permuting the ballots does not change the probability of any election outcome.
  • Neutrality; permuting the candidates on all ballots does not change the probability of any election outcome.
  • Unanimity: If voters universally vote a preference for a given outcome over all others, that outcome is selected. (This is a weak criterion, and is implied by many other stronger ones; but those stronger ones are often disputed, while this one rarely is.)
  • Methods typically do not directly involve money changing hands or other enduring state-changes for individual voters. (There can be exceptions to this, but there are good reasons to want to understand "moneyless" elections.)
In that last one, it could be either real money or some system of "vote tokens" or "vote credits" -- something mentioned under Quadratic voting.

"The fact is that FPTP, the voting method we use in most of the English-speaking world, is absolutely horrible, and there is reason to believe that reforming it would substantially (though not of course completely) alleviate much political dysfunction and suffering."

FPTP = single choice in one round of voting

Its only virtue is that it is very simple to implement. That may be why much interest in alternative voting systems and vote-counting algorithms has come about only in the last half-century or so, when computers became common. A computer can easily do the grunt work of counting with even a very complicated algorithm. I myself have implemented several vote-counting algorithms, including some very complicated ones.
 
Some of the earliest work on voting theory was done by Ramon Llull around 1300, but it was lost until recently. Fast-forwarding to the late 18th century, we have Jean-Charles de Borda and Nicolas de Condorcet in the French Academy. Author Jameson Quinn imagines them debating:
Condorcet: "Plurality (or 'FPTP', for First Past the Post) elections, where each voter votes for just one candidate and the candidate with the most votes wins, are often spoiled by vote-splitting."

Borda: "Better to have voters rank candidates, give candidates points for favorable rankings, and choose a winner based on points." (Borda Count)

Condorcet: "Ranking candidates, rather than voting for just one, is good. But your point system is subject to strategy. Everyone will rate some candidate they believe can't win in second place, to avoid giving points to a serious rival to their favorite. So somebody could win precisely because nobody takes them seriously!"

Borda: "My method is made for honest men!"

Condorcet: "Instead, you should use the rankings to see who would have a majority in every possible pairwise contest. If somebody wins all such contests, obviously they should be the overall winner."
Condorcet himself then discovered a problem with his method: circular preference cycles. A > B, B > C, C > A.

Then came Kenneth Arrow's impossibility theorem: no ranked method can satisfy all of these criteria:
  • Ranked unanimity: if every voter prefers X to Y, then the outcome has X above Y.
  • Independence of irrelevant alternatives: If every voter's preferences between some subset of candidates remain the same, the order of those candidates in the outcome will remain the same, even if other candidates outside the set are added, dropped, or changed.
  • Non-dictatorial: the outcome depends on more than one ballot.
Using this idea, it wasn't long until Gibbard and Satterthwaite independently came up with a follow-up theorem, showing that no voting system (ranked or otherwise) could possibly avoid creating strategic incentives for some voters in some situations. That is to say, there is no non-dictatorial voting system for more than two possible outcomes and more than two voters in which every voter has a single "honest" ballot that depends only on their own feelings about the candidates, such that they can't sometimes get a better result by casting a ballot that isn't their "honest" one.
 
An alternative to an axiomatic method, where one tries to find out whether some method satisfies some criteria and does not satisfy some other ones, is a statistical one, where one randomly generates lots of simulated elections with lots of simulated voters, and then uses various vote-counting methods on the results.

Author Jameson Quinn then notes the work of Warren Smith on various vote-counting systems, assessing them with a measure that JQ calls "Voter Satisfaction Efficiency". He found that rated voting almost always did the best, while FPTP almost always did the worst.

JQ himself has done some research on this issue: Voter Satisfaction Efficiency (VSE) FAQ | Jameson Quinn He confirms WS's results:

FPTP < IRV < approval < rated

He looked at some other methods also, and they rank along the IRV - approval - rated range of values. Two of the best are STAR and ranked pairs.

JQ then mentions the 2000 US Presidential election, where George Bush II won by a hair over Al Gore -- and Ralph Nader got many more votes than the margin.
In the years since, substantial progress has been made. But we activists for voting reform still haven't managed to use our common hatred for FPTP to unite behind a common proposal. (The irony that our expertise in methods for reconciling different priorities into a common purpose hasn't let us do so in our own field is not lost on us.)
 
JQ then considers 5 "anti-patterns" or pathologies.

1. Dark Horse -- a candidate wins because nobody expects them to. A problem with Borda, not so much with other methods.

2. Vote splitting, "spoiled" elections -- a problem with FPTP, not so much with other methods.

3. Center squeeze -- a Condorcet winner loses in top-preference votes. That can happen if that winner was the second preference of most voters. It happened in the 2009 Burlington VT mayoral election. Related to failures of the Favorite Betrayal Criterion.

4. Chicken dilemma -- two candidates must cooperate to win, but of those two, whichever one cooperates less is the winner. A version of it happened in the 1800 US Presidential Election. Related to failures of Later No Harm criterion. But LNH and FBC are incompatible, so one has a dilemma. Systems like STAR and 3-2-1 minimize it, however.

5. Condorcet cycles -- A > B > C > A among top candidates. Rare in real-world elections, however: San Francisco Bay Area, Burlington VT, Maine, Minneapolis MN. Also rare in the polls at the Condorcet Internet Voting Service. I looked at the public polls there, and the closest I found was a first-place tie. But there were some Condorcet cycles among lower-ranked poll options in some of the polls there.
 
JQ then had some words on various systems and their supporters.
FPTP (aka plurality voting, or choose-one single-winner): Universally reviled by voting theorists, this is still favored by various groups who like the status quo in countries like the US, Canada, and the UK. In particular, incumbent politicians and lobbyists tend to be at best skeptical and at worst outright reactionary in response to reformers.

...
IRV supporters tend to think that discussing its theoretical characteristics is a waste of time, since it's so obvious that FPTP is bad and since IRV is the reform proposal with by far the longest track record and most well-developed movement behind it. Insofar as they do consider theory, they favor the "later-no-harm" criterion, and prefer to ignore things like the favorite betrayal criterion, summability, or spoiled ballots.

Approval voting: ... Because of its simplicity, it's something of a "Schelling point" for reformers of various stripes; that is, a natural point of agreement as an initial reform for those who don't agree on which method would be an ideal end state. ...

Condorcet methods: ... In my view, these methods give good outcomes, but the complications of resolving cycles spoil their theoretical cleanness, while the difficulty of reading a matrix makes presenting results in an easy-to-grasp form basically impossible. ...

Bucklin methods (aka median-based methods; especially, Majority Judgment): ... Their VSE is not outstanding though; better than IRV, plurality, and Borda, but not as good as most other methods.

Delegation-based methods, especially SODA (simple optionally-delegated approval): It turns out that this kind of method can actually do the impossible and "avoid the Gibbard-Satterthwaite theorem in practice". ...

Rated runoff methods (in particular STAR and 3-2-1): These are methods where rated ballots are used to reduce the field to two candidates, who are then compared pairwise using those same ballots. They combine the VSE advantages of score or approval with extra resistance to the chicken dilemma. These are currently my own favorites as ultimate goals for practical reform, though I still support approval as the first step.

Quadratic voting: Unlike all the methods above, this is based on the universal solvent of mechanism design: money (or other finite transferrable resources). Voters can buy votes, and the cost for n votes is proportional to n². This has some excellent characteristics with honest voters, and so I've seen that various rationalists think it's a good idea; but in my opinion, it's got irresolvable problems with coordinated strategies. I realize that there are responses to these objections, but as far as I can tell every problem you fix with this idea leads to two more.
Condorcet methods have an additional problem. Those with nice properties, like Schulze beatpath or ranked pairs, are rather complicated to explain, though Schulze is relatively easy to code. Ranked pairs requires doing a cycle detection at each step, and that is rather complicated to code.
 
JQ then briefly discusses multiseat systems. He states that he is the author of E Pluribus Hugo, the voting system used to select finalists for the Hugo awards for the best science-fiction productions of each year.

5 general voting pathologies: lesser names of Moloch - LessWrong - goes into more detail about what I'd mentioned in my previous post.

Multi-winner Voting: a question of Alignment - LessWrong

"In particular, closed-list proportional methods, which offload intra-party selection to some partisan mechanism probably dominated by insiders, are a bad idea."

First question: should there be parties at all? Though some people would disagree, I'd suggest that parties play an inevitable, and in some regards a positive, role in a political process. Yes, they do have bad effects, such as mind-killing tribal thinking; but they also have good ones, such as serving as useful cognitive heuristics for voters, and possibly allowing intraparty sorting to have more of a focus on qualifications and ability rather than ideology. Furthermore, even if you do believe they are bad on net, getting rid of them is really hard. Metaphorically speaking, if you try to design a voting system that bars the door against parties, you may find that they just make a hole in a load-bearing wall as they force their way in anyway.
The US Founders didn't want political parties, judging from those who expressed opinions on that subject, and they wrote no provision for parties into the US Constitution. But during George Washington's first term, the politicians started to divide themselves up into parties.
Second question: how many parties should there be? Too few, and you get a stagnant "monopolistic" or "duopolistic" system in which zero-sum thinking leads to negative-sum outcomes. (For a real-world example, look at the USA.) Too many, and you encourage politicians who make narrow, single-issue appeals. (For a real-world example, look at Israel.)
A small number of parties leads to forming coalitions before elections; a large number, to forming coalitions after elections.

A small number means large parties, and large parties can get factions in them -- parties inside of parties.

Some of the most interesting politics in the US over the last decade has occurred inside of parties. First it was the Republican Party having the Tea Party in it, and now the Democratic Party has a left/progressive faction in it. "Primary" as a verb was coined about the Tea Party's efforts, winning in primary elections against established Republicans. The biggest success there was Dave Brat against Eric Cantor, someone in the House Republican Party's leadership. Progressives in the Democratic Party have also resorted to that tactic, with Alexandria Ocasio-Cortez unseating Joe Crowley, someone in the House Democratic Party's leadership.
 
Then the effective number of parties, apparently using the formula

Eff Num = 1 / (sum over parties of p^2)

where p is each party's fraction of the vote (or of the seats).
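With p as each party's share, that is 1 over the sum of the squared shares -- the Laakso-Taagepera index. A quick illustration (the shares are invented):

```python
def effective_number_of_parties(shares):
    """Laakso-Taagepera effective number of parties: the reciprocal of
    the sum of squared vote (or seat) shares."""
    return 1.0 / sum(p * p for p in shares)

# A hypothetical three-party split
print(effective_number_of_parties([0.5, 0.3, 0.2]))  # about 2.63
```

Two equal parties give exactly 2; a dominant party pulls the number down toward 1.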

JQ's ideal is more than two parties, but each one having more than one issue.

He then lists several building blocks of proportional methods, like deweighting of ballots that elected winners -- reducing their weight to make a proportional outcome.

JQ then mentions
  • PLACE: Proportional, Locally-Accountable Candidate Endorsement
  • EPH: E Pluribus Hugo, what's used in those science-fiction awards
  • SODA: Simple Optionally-Delegated Approval
  • 3-2-1
He proposes some criteria. Some are rather common-sensical, like minimizing wasted votes and being simple for voters.

He states "Ranked ballots for more than about a dozen candidates are intolerably complex for most voters." That means that one must avoid having STV districts that are too large.

He also wants some locality in representation and a moderate number of parties.

He states: "This is obviously a judgment call, but I think that a method that is any threat to an incumbent of average popularity is a non-starter. Insofar as outcomes are different, the losing incumbents should be among those with below-average popularity."

I think that he means that an incumbent should stay in office if there are no challengers more popular than them.

"Have a precinct-summable counting process" - one can find subtotals locally.

Then he puts in a plug for PLACE, something that he thinks will satisfy all these criteria.
 
In the comments for the first one, JQ says "2) As I hinted, I think that the academic literature on this tends to focus more on the axiomatic/Arrovian paradigm than it should. I suspect that there is some political science research that relates, but aside from a few simple results on spoiled ballots under IRV (they go up) I'm not familiar with it."

But the axiomatic approach is good for showing that some method satisfies some criterion 100% of the time. If it does so only 99% of the time (say), the 1% of failures can be very embarrassing -- like Burlington 2009 for IRV, with its "center squeeze" failure.

"Organizations are probably more able to tolerate "complicated" voting methods — especially organizations of "nerds", such as Debian or the Hugo awards. But my intuition in this area is based on anecdotes, not solid research."


PLACE FAQ - Electowiki - "Proportional, Locally-Accountable, Candidate Endorsement"

Rather complicated. Each candidate designates some fallback candidates in case they don't win, and each voter votes for one candidate. Candidates that lose get their votes transferred. There's more in it, but I'm confused.


SODA voting (Simple Optionally-Delegated Approval) - Electowiki

Approval voting with an addition: one can delegate one's vote to a candidate if one wants to. That candidate then casts the delegated votes as approvals of other candidates.


3-2-1 voting - Electowiki
  • Find 3 semifinalists: the candidates with the most “good” ratings.
  • Find 2 finalists: the semifinalists with the fewest "bad" ratings.
  • Find 1 winner: the finalist who is rated above the other on more ballots (like a virtual runoff).
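These three steps translate almost directly into code. A minimal sketch, assuming three-level ballots where each voter rates each candidate "good", "ok", or "bad" (unrated candidates count as "bad"; tie handling omitted):

```python
def three_two_one(ballots, candidates):
    """ballots: list of dicts mapping candidate -> 'good' | 'ok' | 'bad'."""
    good = {c: sum(1 for b in ballots if b.get(c) == "good") for c in candidates}
    bad = {c: sum(1 for b in ballots if b.get(c, "bad") == "bad") for c in candidates}
    # Step 1 -- 3 semifinalists: the most "good" ratings
    semis = sorted(candidates, key=lambda c: -good[c])[:3]
    # Step 2 -- 2 finalists: the fewest "bad" ratings among the semifinalists
    a, b = sorted(semis, key=lambda c: bad[c])[:2]
    # Step 3 -- 1 winner: virtual runoff between the two finalists
    rank = {"good": 2, "ok": 1, "bad": 0}
    a_over_b = sum(1 for bl in ballots if rank[bl.get(a, "bad")] > rank[bl.get(b, "bad")])
    b_over_a = sum(1 for bl in ballots if rank[bl.get(b, "bad")] > rank[bl.get(a, "bad")])
    return a if a_over_b >= b_over_a else b

# Invented ballots: X has the most "good" ratings but also more "bad" ones
ballots = [{"X": "good", "Y": "ok", "Z": "bad"}] * 3 + \
          [{"X": "bad", "Y": "good", "Z": "ok"}] * 2
print(three_two_one(ballots, ["X", "Y", "Z"]))  # X wins the virtual runoff 3-2
```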
 
The Voting System | The Hugo Awards
The Hugo Awards - What's New in 2017 - Worldcon 75

It is a two-step process.


In the first step, everybody makes their nominations, at most five per category.

The nominations for each nominee are first added up, one point per nominating ballot.

For the elimination step, the votes are also counted with the refinement that each nominee gets 1/n of a point from a ballot with n surviving nominees.

Then the two nominees with the fewest points in this fractional count are compared using the first count, and whichever one has the fewer nominations is dropped.

These two steps are repeated, with dropped nominees ignored in the count of nominees in each ballot. The repetition ends when six nominees remain.

If any nominees are disqualified or withdrawn, then the last-dropped nominees take their place.
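As I understand it, that elimination loop can be sketched like this (ballots as sets of nominees; the tie rules and the withdrawal refinement omitted):

```python
def eph_finalists(ballots, n_finalists=6):
    """E Pluribus Hugo elimination sketch. ballots: list of sets of
    nominees. Each round: give each surviving nominee 1/n of a point per
    ballot (n = that ballot's surviving nominees), take the two
    lowest-scoring nominees, and drop the one with fewer raw nominations."""
    alive = set().union(*ballots)
    while len(alive) > n_finalists:
        points = {c: 0.0 for c in alive}
        noms = {c: 0 for c in alive}
        for b in ballots:
            survivors = b & alive
            for c in survivors:
                points[c] += 1.0 / len(survivors)
                noms[c] += 1
        lowest_two = sorted(alive, key=lambda c: points[c])[:2]
        alive.remove(min(lowest_two, key=lambda c: noms[c]))
    return alive

# Tiny invented example, keeping 2 finalists out of 3 nominees
ballots = [{"A", "B"}, {"A", "B"}, {"A"}, {"C"}]
print(eph_finalists(ballots, 2))  # C is eliminated, leaving A and B
```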


In the second step, everybody takes part in an Instant Runoff Voting election to find the winner, an election that includes "No Award". The winner is then dropped from the ballots and the IRV count is repeated to find the second-place finisher. This is repeated until the fifth-place finisher has been found.

If "No Award" wins in the count for the winner, then no award is given that year and the counting stops.
 
RangeVoting.org - DH3 pathology
That is, Dark Horse + 3 Rivals
It is simply this. Suppose there are 3 main rival candidates A, B, & C, who all have some good virtues. This happens a lot. (In fact, whenever it doesn't happen, the situation is uninteresting – only 2 real contenders – and we almost might as well just be using the plurality voting system.) Let us suppose support is roughly equally divided among those three, say 31%, 32%, and 37%, although the precise numbers do not matter much. Suppose also there are one or more additional "dark horse" candidates whom nobody takes seriously as contenders because they stink. For simplicity assume there is only one dark horse D, but what we are going to say also works (indeed works even more powerfully) with more than one.

Now, what happens? The A-supporters say to themselves: "We are in trouble. Polls suggest A is going to lose if we just vote A>B>C>D as is our honest opinion. But if we exaggeratedly vote A>D>B>C downgrading A's main rivals as far as we can, then maybe A will have a chance." The B-supporters say "those rotten A-supporters for sure are going to exaggerate and effectively get twice the A-versus-B discriminating power as if they were honest. We cannot sit still and just take that. We have to fight back by also exaggerating: B>D>C>A." And similarly the C-supporters say "we will not just sit back and be robbed of our deserved victory by those dishonest exaggerating scum. We will also exaggerate: C>D>A>B." (And by the way, they are completely right. C would definitely lose to A or B if they just sat there.)
In effect, all of the voters decide to bury the candidates that they dislike by ranking D just below their top preference.

D wins in the Borda count and in Condorcet methods, and this seems much like center squeeze.
 
I'll now consider nonpartisan multiseat elections.

Party-list proportional representation is obviously a partisan method, but is it possible to get proportionality with nonpartisan methods?

Let's see what we have.

A very simple nonpartisan method is general ticket: voting for complete slates of candidates in single-seat fashion. Each slate has a candidate for each seat.

But GT is not very fine-grained, and FPTP GT, using FPTP on the slates, is worse than FPTP itself. So let us turn to methods where one votes for individual candidates.


Bloc vote (or block vote) is essentially a form of limited approval voting, where the maximum number of candidates that one can vote for is the number of seats.

Let us consider what happens with a partisan vote. The candidates will be divided up into slates with a candidate for each seat, and the voters will be divided up into factions that each vote the same way about each slate.

In a partisan vote, each voter votes for all the members of one of the slates, and each faction votes for one slate each. That makes the election reduce to FPTP GT.


Relaxing the limitation on voters' number of candidates gives full-scale approval voting, and a partisan vote gives approval-vote GT, using approval voting on the slates.


Going the other way, with each voter voting for fewer candidates than seats, is called limited voting. The ultimate in that is voting for only one candidate: strictly limited voting or single non-transferable vote.

For a partisan vote, each faction's voters will have to vote for different subsets of that faction's slate, subsets that must average out to be as close to the slate as possible.

Here also, one gets FPTP GT.


For IRV, one can stop when S candidates remain, for S seats. For a partisan vote, each faction's voters will have to use different orderings of their slate, so that the slate's candidates average out to the same ranking. This reduces to IRV GT.

Likewise, many single-seat methods return rankings or ratings of all the candidates, and one can use the top S candidates in these. In a partisan vote, this reduces to using that method in GT.

An alternative is to find a winner with a single-seat method, remove that winner from the ballots, and then repeat until one has filled all the seats. With a partisan vote, this also reduces to using that method in GT.


This might seem to mean that nonpartisan methods are doomed to approximate general ticket.
 
For multiseat IRV, one can remove winners along with losers. One recognizes a winner by that candidate having more top-preference votes than some victory quota Q, a quota that is usually (total votes)/(S+1).

In a partisan vote, multiseat VQ-IRV again reduces to general ticket, what may be called VQ-IRV GT.

But VQ-IRV offers a way of avoiding GT and becoming at least somewhat proportional: reducing the strength of the ballots that contributed to each victory. This is often justified by saying that some of those ballots' work is now done, and in a partisan vote, it gives slates other than the winner's slate the opportunity to also have winners.

This can be done in either of two ways: dropping Q of the victory ballots, or reweighting all of them (a process also called deweighting or downweighting). In reweighting, every ballot starts with a weight of 1, and the victory ballots get their weights multiplied by (1 - Q/V), where V is the number of those victory ballots.

This is what is usually called the Single Transferable Vote or STV.
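A rough sketch of this quota-and-reweighting scheme (Droop quota, fractional surplus transfer; tie handling and exhausted-ballot subtleties glossed over):

```python
def stv(ballots, seats):
    """STV sketch. ballots: {preference-tuple: count}. A candidate
    reaching the quota Q = total/(seats+1) is elected, and the ballots
    that elected them are reweighted by (1 - Q/V), V being their total
    weight; otherwise the last-place candidate is eliminated."""
    weights = {order: float(n) for order, n in ballots.items()}
    hopeful = {c for order in ballots for c in order}
    quota = sum(weights.values()) / (seats + 1)
    winners = []
    while len(winners) < seats and hopeful:
        tally = {c: 0.0 for c in hopeful}
        tops = {}
        for order, w in weights.items():
            top = next((c for c in order if c in hopeful), None)
            if top is not None:
                tally[top] += w
                tops[order] = top
        leader = max(tally, key=tally.get)
        if tally[leader] >= quota or len(hopeful) <= seats - len(winners):
            winners.append(leader)
            v = tally[leader]
            if v > 0:
                for order, top in tops.items():
                    if top == leader:
                        weights[order] *= 1 - quota / v  # surplus transfer
            hopeful.remove(leader)
        else:
            hopeful.remove(min(tally, key=tally.get))
    return winners

# Two factions, 60 and 40 voters, electing 2 of 3 candidates
ballots = {("A1", "A2", "B1"): 60, ("B1", "A1", "A2"): 40}
print(stv(ballots, 2))  # ['A1', 'B1'] -- one seat per faction
```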


It is used in some places, but it has the problem that one has to rank a lot of candidates. But reweighting can also be extended to multiseat approval and rated voting, and voting may be easier there.
 
Reweighting of rated votes for proportional representation:
Warren Smith thought that he invented the method, but it goes back to Thorvald N. Thiele in Denmark in 1895 (the approval case, with K = 1).

Reweighted Range Voting is multiseat rated voting where winners are found one by one, and where the ballots' weights are recalculated after each victory. Each ballot's weight is K/(K + V/M), with K some constant, V the sum of that ballot's ratings of the winners so far (calculated with the original values), and M the maximum rating (the minimum rating is assumed to be 0).

The weighting closely parallels some highest-averages methods of proportional allocation:
  • D'Hondt / Jefferson: K = 1
  • Sainte-Laguë / Webster: K = 1/2
The averages in these methods are given as V/(K + S) for number of votes V and number of seats already allocated S.

Whichever party has the highest average gets a seat in each round of allocation.

Larger K tilts toward larger parties and smaller K tilts toward smaller parties.
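A sketch of the RRV reweighting just described (candidate names and ratings invented; no tie handling):

```python
def rrv(ballots, seats, K=1.0, M=5):
    """Reweighted Range Voting sketch. ballots: list of dicts mapping
    candidate -> rating in 0..M. After each win, a ballot's weight
    becomes K / (K + V/M), with V its summed (original) ratings of the
    winners so far. K = 1 parallels D'Hondt, K = 1/2 Sainte-Lague."""
    candidates = {c for b in ballots for c in b}
    winners = []
    for _ in range(seats):
        score = {c: 0.0 for c in candidates if c not in winners}
        for b in ballots:
            V = sum(b.get(w, 0) for w in winners)
            weight = K / (K + V / M)
            for c in score:
                score[c] += weight * b.get(c, 0)
        winners.append(max(score, key=score.get))
    return winners

# 60 voters back the A slate, 40 the B slate (ratings out of 5)
ballots = [{"A1": 5, "A2": 4}] * 60 + [{"B1": 5, "B2": 4}] * 40
print(rrv(ballots, 3))  # ['A1', 'B1', 'A2'] -- a 2:1 split, as D'Hondt gives
```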
 
An alternative is Single distributed vote - Electowiki

The votes may be expressed as a matrix R(v,c) for voter v and candidate c.

For each round, the votes are reweighted:

R(v,c) gets R(v,c) * (K*R(v,c)) / (K*R(v,c) + V) -- with R's original values on the right-hand side

where V = sum over winning c's for ballot v of R(v,c) (original values again).

A more general version is

R(v,c) gets R(v,c) * Radj(v,c) / (Radj(v,c) + V)

where

Radj(v,c) = A + B*R(v,c)

For RRV, A = M*K and B = 0; for SDV, A = 0 and B = K.
 
 Apportionment paradox of proportional-allocation algorithms.

The US House had earlier used a largest-remainder algorithm for dividing seats among the states, and this had some odd consequences.

After the 1880 census, the chief clerk of the US Census Bureau calculated allocation of seats among the states for House sizes from 275 to 350. He discovered that going from 299 to 300 reduced the state of Alabama's number of seats from 8 to 7. Thus, the "Alabama paradox". Something similar happened in 1900, when someone at the Census Bureau calculated that the state of Colorado would have 3 seats for total seats from 350 to 400, except for 357 seats, where CO would have only 2 seats.

This happens when a state gets pushed down the line by other states having higher remainders.

The "population paradox" happened in 1900, when Virginia lost a seat to Maine, despite having a faster-growing population.

The "new-states paradox" happened in 1907, when Oklahoma was admitted as a state. The House was enlarged to give OK an appropriate number of seats, but when the House was reallocated, New York lost a seat while Maine gained one.

However, largest-remainder algorithms satisfy something called the quota rule. It states that the number of seats that a party/state/whatever gets should be either a rounding-down (lower quota) or a rounding-up (upper quota) of its natural quota: its fraction of the total, multiplied by the number of seats.
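A largest-remainder allocator is only a few lines of code, and a textbook-style example (with invented populations) exhibits the Alabama paradox directly:

```python
def largest_remainder(pops, seats):
    """Hamilton / largest-remainder apportionment: give each state the
    floor of its natural quota, then hand leftover seats to the states
    with the largest fractional remainders."""
    total = sum(pops.values())
    quotas = {s: pops[s] * seats / total for s in pops}
    alloc = {s: int(quotas[s]) for s in pops}
    leftover = seats - sum(alloc.values())
    for s in sorted(pops, key=lambda s: quotas[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

# Classic Alabama-paradox case: C loses a seat when the house grows
pops = {"A": 6, "B": 6, "C": 2}
print(largest_remainder(pops, 10))  # C gets 2 seats
print(largest_remainder(pops, 11))  # C drops to 1 seat
```

Going from 10 to 11 seats raises A's and B's remainders above C's, so the leftover seats change hands.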


In 1983, mathematicians Michel Balinski and Peyton Young proved an impossibility theorem for proportional-allocation algorithms.

No allocation algorithm can satisfy all these criteria:
  • Quota rule
  • No Alabama paradox
  • No population paradox
However, they can satisfy some of them.

As I'd mentioned, largest-remainder methods satisfy the quota rule at the expense of the Alabama and population paradoxes.

Highest-averages methods (D'Hondt, Sainte-Laguë, Huntington-Hill) have no Alabama paradox and no population paradox at the expense of violating the quota rule, meaning that they can produce additional disproportion.

It's possible to satisfy the quota rule and have no Alabama paradox, but it's not possible to satisfy the quota rule and have no population paradox.
 
Now for how RRV is related to highest-averages proportional representation.

Imagine strict partisan voting, with the voters divided up into factions that each support only one slate of candidates. I'll simplify the discussion by doing the approval-voting case, SPAV.

Faction i has total vote

T(i) = N(i)*K / (K + W(i)/M)

where N(i) is the number of voters in faction i and W(i) is the number of winners already selected from that faction's slate.

To within scaling factors, this is the expression for the average in the highest-averages method. So RRV is proportional in the limit of large slates. This argument also shows that SDV and generalized SDV are also proportional, since (number of winners) >> 1 makes (sum of winner votes) >> (vote for each candidate).

Analysis for small numbers of votes is more difficult.
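The reduction can also be checked numerically: with strict slate voting, SPAV (the approval case of RRV, K = 1) gives the same seat split as D'Hondt. A sketch with invented faction sizes:

```python
from collections import Counter

def spav(ballots, seats):
    """Sequential proportional approval voting: ballots are sets of
    approved candidates; a ballot's weight is 1/(1 + W), where W is the
    number of its approved candidates already elected."""
    candidates = set().union(*ballots)
    winners = set()
    for _ in range(seats):
        score = {c: 0.0 for c in candidates - winners}
        for b in ballots:
            w = 1.0 / (1 + len(b & winners))
            for c in b - winners:
                score[c] += w
        winners.add(max(score, key=score.get))
    return winners

def dhondt(votes, seats):
    """Highest-averages allocation with averages V/(1 + S)."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (1 + alloc[p]))
        alloc[best] += 1
    return alloc

# Strict partisan voting: each faction approves exactly its own slate
slates = {p: {f"{p}{i}" for i in range(1, 9)} for p in "XYZ"}
ballots = [slates["X"]] * 53 + [slates["Y"]] * 31 + [slates["Z"]] * 16
seats_by_party = Counter(name[0] for name in spav(ballots, 8))
print(seats_by_party)                          # X: 5, Y: 2, Z: 1
print(dhondt({"X": 53, "Y": 31, "Z": 16}, 8))  # the same 5-2-1 split
```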
 