Rigobon on Carter Center response: Statistically incorrect

September 19, 2004

As I said in a previous post, I did not want to give my opinion on the Carter Center response to Rigobon and Hausmann until I heard from the experts, but I did use the word “silly” to refer to some parts of that report. Perhaps I should have said “amateur,” and Roberto Rigobon of MIT agrees. A reader points out in the comments that Rigobon’s response is in El Universal; it must not have been in the print edition, which is the one I read.


Rigobon’s response centers on two issues:

1) The Carter Center said that the correlation between signatures and votes was the same in the full election and in the audited sample.

2) The Carter Center said the averages in the audited sample matched the averages of the full vote.

These are the arguments in each case:

1) It was with respect to this part that I used the word “silly,” and Rigobon seems to agree. He says: “This argument is statistically incorrect because i) the correlation between a variable with itself is one, and ii) the correlation between a variable and 10% of itself is also one.”

Basically, what Rigobon is saying is that the correlation coefficient, which measures how well two things follow each other, will be essentially the same whether you compare the signatures to the full vote or to only a fraction of it. Thus, if the tampering removed part of the SI votes, the correlation would stay the same or similar, and the Carter Center has proven absolutely nothing about the problem at hand.

2) The Carter Center argues that the averages in the audited sample are similar to those of the full vote. Rigobon says that this too is statistically incorrect: you can construct a set of results that preserves the averages but in no way reflects the true results.

Rigobon gives an example using a Florida election to show how you could keep the averages the same while tampering with the results. Basically, for the averages to match, the audit has to give the tampered machines the same weight they have in the vote itself. Imagine the fraud involved half the machines being tampered with: if the audit is then performed half on clean machines and half on the tampered ones, its average will match the overall average of the falsified count.
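A sketch of that construction in Python (the machine counts and vote totals are invented): with half the machines tampered, an audit that draws clean and tampered machines in the same 50/50 proportion reproduces the already-falsified overall average.

```python
import random
import statistics

random.seed(2)
# Hypothetical machines: 500 clean, 500 tampered (SI votes cut by 30%).
clean = [random.randint(200, 600) for _ in range(500)]
tampered = [int(0.7 * random.randint(200, 600)) for _ in range(500)]
machines = clean + tampered

# An audit that samples clean and tampered machines in the same
# proportion as the full count reproduces the overall average...
audit = random.sample(clean, 100) + random.sample(tampered, 100)
print(round(statistics.mean(machines)), round(statistics.mean(audit)))

# ...even though half of the "results" are nothing like the true votes.
```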

By the way, the Carter Center says that the averages were the same; however, the average number of voters per machine was 404 in the audit and 440 in the election. I don’t know whether this difference is statistically significant, but the two figures are certainly not the same. Did the Carter Center even notice the difference?
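Whether 404 versus 440 is significant could be checked with a simple two-sample comparison. Everything below except the two averages is made up (the real per-machine spread and sample sizes would have to come from the audit data), so this only shows the shape of the check, not a conclusion:

```python
import math

mean_audit, n_audit = 404, 150   # 404 is from the report; n_audit is invented
mean_vote, n_vote = 440, 19000   # 440 is from the report; n_vote is invented
sd = 120                         # hypothetical per-machine standard deviation

# Two-sample z statistic for the difference of the means.
se = math.sqrt(sd**2 / n_audit + sd**2 / n_vote)
z = (mean_audit - mean_vote) / se
print(round(z, 2))  # |z| > 2 would suggest the gap is unlikely to be chance
```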

While Rigobon makes no mention of it, the Carter Center report describes a study of the random number generator, checking that it was indeed random by having it generate samples of voting machines. To me this was also silly: the random number generator in my Excel spreadsheet would pass the same test today, but on the day the ballot boxes were selected for audit I could have used it (or not!) in such a way that it picked a prearranged sequence of boxes, or had its output internally replaced (even within Excel!) by a prearranged table.
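The point is easy to demonstrate with any seedable generator; here is a sketch in Python, with invented machine IDs and an invented seed:

```python
import random

all_machines = list(range(1, 20001))  # hypothetical machine IDs

# A seeded generator would pass any after-the-fact randomness test...
rigged = random.Random(20040815)
audit_sample = rigged.sample(all_machines, 5)

# ...yet anyone who fixed the seed in advance knew exactly which
# machines would be "randomly" drawn on audit day.
replay = random.Random(20040815)
print(audit_sample == replay.sample(all_machines, 5))  # True
```

Verifying after the fact that a generator produces random-looking samples says nothing about how it was actually driven on the day of the selection.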

Sumate has criticized the fact that the Carter Center does not identify who wrote this report. I imagine the reason is to avoid the problem they have had with people directly contacting their experts to show they are wrong. Naming the authors has the “non-political” consequence that academics, who like to preserve their academic reputations, can be convinced to change their minds. With this report nobody knows the author, so there is no intellectual integrity or honesty to be compromised other than the Carter Center’s. Thus, the Carter Center continues to act with superficiality and, in this case, with less transparency than ever.
