The third Simon Bolivar Seminar on statistical analysis of the recall referendum took place last Thursday, with two talks by Jose Huerta and Luis Raul Pericchi on studies that I have discussed here before, and a third talk by Carenne Ludeña looking at the results from a critical point of view. In some sense I did not learn as much from the talks themselves, since I was already aware of the results, but I did learn quite a bit about the progress made by others, not only in studies of the results but also in new avenues of research, such as attempts to correlate those centers that had “anomalous” results with those that received data during the day on August 15th. But more on that later.
Luis Raul Pericchi et al.: Methods to indirectly verify the non-intervention of an election
I have mentioned Pericchi’s work earlier, since he is the mathematician who has been applying Benford’s law to the recall vote. Basically, Benford’s law allows for the detection of manipulation of data, in this case the results of the recall vote, by looking at either the first or the second digit of each number in the data set: if the frequency of occurrence of these digits deviates from what the law predicts, the data may have been tampered with.
Pericchi looked at both the first and the second digits. This is done because the first-digit test may not be the most accurate: the first digit may be bounded in a range, for example if no voting machine had either Si or No votes above 900 (an invented example).
The results are similar for both digits, but I will convey those for the second digit. What is done is to compare the observed frequency of the digits with the expected frequency and perform a statistical test, either calculating the simple probability of such an occurrence, or the so-called P value, a number used to determine whether or not the data was intervened. A P value below 0.05 is considered an indication that the data was intervened. In the case of the second digit I will quote both the P value and the probability.
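To make the test concrete, here is a minimal sketch (my own illustration, not Pericchi’s code) of the expected second-digit frequencies under Benford’s law, and of the chi-square statistic one would compute from observed digit counts. The digit counts below are invented:

```python
import math

def benford_second_digit_probs():
    """Expected frequency of each second digit (0-9) under Benford's law."""
    probs = []
    for d in range(10):
        # Sum over every possible first digit 1-9.
        p = sum(math.log10(1 + 1 / (10 * d1 + d)) for d1 in range(1, 10))
        probs.append(p)
    return probs

def chi_square_stat(observed_counts, expected_probs):
    """Chi-square goodness-of-fit statistic for observed digit counts."""
    n = sum(observed_counts)
    return sum((obs - n * p) ** 2 / (n * p)
               for obs, p in zip(observed_counts, expected_probs))

probs = benford_second_digit_probs()
# Under Benford's law '0' is the most common second digit (~12%)
# and '9' the least common (~8.5%); a flat (uniform) distribution
# of second digits is therefore a red flag.
print([round(p, 4) for p in probs])

# Invented example: 1000 numbers whose second digits are perfectly uniform.
uniform_counts = [100] * 10
print(round(chi_square_stat(uniform_counts, probs), 2))
```

The P values quoted below come from comparing a statistic like this one against its reference distribution; a large statistic (small P value) means the observed digits are unlikely under Benford’s law.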
Second digit results:
-Manual centers, Si votes: P value = 0.0032, Prob ~ 5%. Inconclusive.
-Automatic centers, Si votes: P value = 0.02, Prob ~ 20%. Suggests non-intervention.
-Manual centers, No votes: P value ~ 0.15, Prob ~ 44%. Suggests non-intervention.
-Automatic centers, No votes: P value ~ 0.000…, Prob ~ 0%. Indicates intervention.
Essentially, the frequency distribution of the No votes in the automated centers was found to be flat, a uniform distribution of digits, not at all what is expected from Benford’s Law and very different from what is found in the No votes in the manual centers.
The same pattern was also found in the total number of votes at each center, that is, the sum of Si plus No votes: the manual centers were fine, but the automated centers appear intervened. This result is quite strong and cannot be dismissed easily, as voting records not only usually follow Benford’s law, but in this case the manual centers are shown to behave correctly, making a very strong case for intervention of the data in the automated centers.
Even more interesting, when Benford’s Law was tested on the No votes in the audited machines, the results were quite different, with a P value of 0.24 and a probability of 48%, very different from the overall results, suggesting there was something different about that sample.
For skeptics, I repeat: similar behavior was found for both the first and second digits, in which the No vote numbers and the total numbers indicate intervention, since the probabilities of this happening by chance are extremely low. It is going to be extremely difficult to “explain away” this result.
Recall also that my pedestrian use of Benford’s law to test the Proyecto Venezuela exit poll matches very well what is expected. While I did not perform any statistical tests, the differences in both the Si and No numbers from what is expected do not appear to be significant, and the frequency distribution is certainly not flat.
To close, Pericchi also mentioned that, using different techniques, he has obtained results similar to Jimenez’s on coincidences, in a less detailed study so far.
Jose Huerta and Jesus Gonzales: Comparison of the recall vote and other electoral processes
Huerta presented a more detailed version of the work I posted earlier, in which he compared the votes from the 1999, 2000 and 2004 elections. Huerta finds that there is more predictive correlation at the municipality level between the 2000 and the 2004 vote than between the 1999 and 2000 vote. Huerta, who is a social scientist who studies poverty, concludes that this is very surprising not only from a political point of view, given what has happened in the country in those four years, but also from a social point of view, since poverty, unemployment and crime are all up.
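To illustrate what a predictive correlation at the municipality level means, here is a small sketch comparing the Pearson correlation of two election pairs. All of the vote shares below are invented; none of these numbers come from the actual elections:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented pro-government vote shares in five hypothetical municipalities.
share_2000 = [0.55, 0.48, 0.62, 0.40, 0.51]
share_2004 = [0.56, 0.47, 0.63, 0.41, 0.52]  # tracks 2000 very closely
share_1999 = [0.70, 0.35, 0.50, 0.60, 0.45]  # much noisier relationship

# A high r for 2000-2004 and a low r for 1999-2000 would mirror
# the pattern Huerta reports.
print(round(pearson_r(share_2000, share_2004), 3))
print(round(pearson_r(share_1999, share_2000), 3))
```

The surprise in Huerta’s finding is precisely that the later pair correlates so strongly despite four turbulent intervening years.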
Huerta made a couple of comments that I found to be quite interesting and inconsistent with what is known. One: the growth in the electoral registry is larger in rural areas than in urban areas, 18% to 14%, inconsistent with statistical data from the Government and with the fact that there is no evidence of a reversal of the migration trend of the last forty years. But the second comment was perhaps the most surprising: Huerta finds that the largest proportion of changes in the electoral registry were from urban areas to rural areas, which makes no sense whatsoever. His suggestion is that this was done on purpose to have the manual centers match the national automated vote.
Carenne Ludeña: A critical view at the models used to study the recall vote.
Ludeña basically tried to point out where bias or assumptions may affect the results, leading to a conclusion that may suggest fraud but is model-based. The talk had some interesting points and considerations, but I found nothing compelling about it. She pointed out, for example, how the Hausmann and Rigobon model of errors may be flawed, by proposing an alternative, but I found the alternative less compelling than the original model. Essentially, she said that the exit polls and the signatures for the recall may have had a correlation factor due to external pressures. However, in my mind these correlations did not exist, as the two processes were different.
The signatures were going to be public, which meant that some of those who wanted to sign did not, for fear of retaliation. In the exit polls the situation is different: whether you are pressured into lying depends on where you are voting, not how. Essentially, in a Si-dominated center people may feel pressured to say they voted Si, but the opposite is true if the center is dominated by No voters. If it were true that the No won by 60-40%, then the correlation she points out should not exist, or should not be important.
Other comments:
1) There are many people working on this problem, and they are now getting into the details of how the intervention may have been implemented. Perhaps the most interesting comment I heard was about communications between the voting machines and the servers. Essentially, the machines were not supposed to communicate at all during the day, and the data flow was not supposed to be bidirectional, in the sense that while handshakes are to be expected, additional data should not flow from the servers to the voting machines. That is not what happened. The data transmission record exists in detail for all machines, and the data is quite interesting:
-Not all machines had communications during the day
-Calls were terminated in two ways, either by the server or by the voting machine. In one of the two (I don’t remember which), the amount of data transmitted to the machine was larger than from the machine to the server. There appears to be a correlation between this and the “anomalous” centers with funny vote distributions.
This work is still in progress.
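To illustrate the kind of screen being applied to the transmission records, here is a sketch over an invented call log. The field names and byte counts are my own assumptions, purely for illustration; the actual records are not public in this form:

```python
# Invented transmission log entries:
# (machine_id, bytes_machine_to_server, bytes_server_to_machine, terminated_by)
calls = [
    ("M-001", 5200, 310, "machine"),
    ("M-002", 4800, 9600, "server"),   # more data flowed INTO the machine
    ("M-003", 5100, 280, "machine"),
    ("M-004", 4700, 8800, "server"),
]

def suspicious_sessions(log):
    """Sessions where more data flowed from the server to the voting
    machine than the other way around -- the pattern that, per the talk,
    may correlate with centers showing anomalous vote distributions."""
    return [machine for machine, up, down, _ in log if down > up]

print(suspicious_sessions(calls))  # the machines that received more than they sent
```

A screen like this, crossed against the list of centers with odd vote distributions, is what the correlation study described above amounts to.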
2) In the work of Isbelia Martin et al. that I reported on earlier, a peculiarity was observed: in some states, the dispersion of votes by machine size shows two “clouds” when one looks at the Si or No votes, instead of only one. Some have wanted to explain away this behavior by saying it reflects two geographic or social populations. The problem is that the mathematical properties of each “cloud” have inconsistencies, such as the fact that if you do a fit to only one cloud, the intercept is not zero.
The above result could be explained away by artificialities in the data. But what cannot be explained away is that the intercept is the same for the Si and No votes. There can be no correlation between the two! If anything should not be correlated, it is these two populations. There can be no justification for this coincidence, state after state, wherever the two clouds are observed!
If this last result is found in a few of the states where the binomial distribution is “chopped up”, in my mind there is no doubt mathematically that the data was intervened. This work is also in progress.
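The intercept diagnostic can be sketched as follows. The machine sizes and vote counts are invented, only to show why the check works: votes that simply scale with machine size give a fitted intercept near zero, while a shifted “cloud” does not. This is my own illustration of the diagnostic, not the actual fit from the study:

```python
def ols_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Invented data: votes on a machine should scale with the number of
# voters assigned to it, so a fit through one "cloud" should pass
# close to the origin.
sizes    = [100, 200, 300, 400, 500]
si_votes = [40, 80, 120, 160, 200]   # proportional: intercept ~ 0
shifted  = [90, 130, 170, 210, 250]  # same slope, offset by 50: intercept ~ 50

a0, b0 = ols_fit(sizes, si_votes)
a1, b1 = ols_fit(sizes, shifted)
print(round(a0, 3), round(b0, 3))
print(round(a1, 3), round(b1, 3))
```

A nonzero intercept in one cloud is suspicious on its own; the same nonzero intercept appearing in both the Si and the No fits is the coincidence the paragraph above objects to.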
3) One last conclusion, to me, is that the recall vote data shows quite a number of “strange” results. As someone said, the probability of a given person winning the lotto is very low; however, the fact that some person wins every week is not strange. In the recall vote, mathematical studies show quite a number of strange results; it is as if the same person won the lotto week after week. In fact, few of these statistical studies show results for which the data looks reasonable or normal, and that may be the biggest abnormality or anomaly of all.
