I've done a fair amount of election integrity work, both in understanding the statistical analyses and in determining the mechanisms and criminal networks behind election theft, so I wanted to chime in here.
Statistical methods on their own can't prove election fraud took place, but they are very useful for highlighting red flags. Ever since the 2002 election, people like Richard Charnin have used the disparities between official results and (pre-election or exit) polls to indicate fraud. It's a very appealing idea: polls, when properly conducted, should reflect voters' actual preferences, and serve as a useful check on manipulable electronic vote counts. Of course, polls can be flawed if the sample is biased. And indeed, those who denied that the persistent GOP shifts in the polls pointed to right-wing vote manipulation used that defense, claiming that the polls oversampled Democrats. So the issue of using statistics to detect election fraud is a bit complicated.
Good analysts like Ron Baiman, Kathy Dopp, and Jonathan Simon are aware of this possibility, and do in-depth analyses of polls to rule out sampling bias. From 2004-2006,
US Count Votes fought an intense debate over whether the 2004 exit poll discrepancy (which showed Kerry winning rather than Bush) could be explained by response bias in favor of Democrats; in my opinion, they won that debate. In 2006,
Jonathan Simon looked at the exit polls (which showed Democrats doing significantly better than the official vote counts) and discovered that, if anything, the polls were actually oversampling Republicans. More recently, in 2014,
Simon published an article on how the likely-voter cutoffs in pre-election polls also systematically underestimate Democratic voters. And
Election Justice USA's report on the 2016 Democratic primaries convincingly ruled out polling error explanations for why the exit polls showed Bernie Sanders doing better than the official counts.
This kind of analysis is the gold standard of election forensics. And Charnin, remarkably, has never done it. He simply calculated the probability of the poll result occurring if the official results were correct, and concluded that since the probability was extremely low, the results were incorrect. That ignores a key fact that any intro-level statistics class would teach: you can't draw meaningful conclusions if your sample is hopelessly biased. If the poll massively oversampled Democrats, the fact that it shows Dems doing better than the official results means nothing. That's why the work by Simon and the others listed above is so valuable, and the work by Charnin is comparatively quite poor, yet somehow came to dominate the election integrity movement.
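For readers unfamiliar with the method, Charnin-style calculations amount to something like the following sketch. The numbers here are purely hypothetical (not from any actual election), and the whole point is the assumption flagged in the comment: the p-value is only meaningful if the poll's sample was actually random.

```python
from math import sqrt, erfc

def poll_discrepancy_pvalue(p_official, p_poll, n):
    """Two-sided p-value for observing a poll share p_poll (sample size n)
    if the official share p_official were the true population share.

    Uses the normal approximation to the binomial. Crucially, this assumes
    the poll is a simple random sample of voters -- exactly the assumption
    that a biased poll violates, which is why a tiny p-value alone proves
    nothing about fraud."""
    se = sqrt(p_official * (1 - p_official) / n)  # standard error under the null
    z = (p_poll - p_official) / se
    return erfc(abs(z) / sqrt(2))                 # P(|Z| >= |z|)

# Hypothetical example: official share 48%, exit poll shows 51% on n = 2000.
# The resulting p-value is well under 1%, i.e. "extremely unlikely" -- but
# only if the sample was unbiased, which is the very thing in dispute.
print(poll_discrepancy_pvalue(0.48, 0.51, 2000))
```

A low p-value here tells you the poll and the count disagree beyond chance; it cannot tell you *which one* is wrong. That distinction is exactly what the sampling-bias analyses by Simon and the others address.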
But curiously enough, even though Charnin's work was inferior, he almost always came to the correct conclusions. That changed in late 2016, when he became convinced
before the general election that it would be stolen in Hillary's favor. He began producing shoddy analyses of the pre-election polls to argue that they were oversampling Hillary voters. Specifically, he took Gallup's figure that 40% of voters nationwide were "independent", ignoring that many self-described independents reliably lean toward one party, and somehow applied this
nationwide percentage to individual state polls (which is totally illegitimate). Then when the
exit polls came along and showed Hillary winning multiple states that she officially lost, Charnin inexplicably broke with all his past work that focused on exit poll analysis and argued that for this one particular election, the exit polls were actually rigged in Hillary's favor.
I looked into Charnin's claims about the exit polls and
quickly found them to rest on an outright falsehood. He claimed that the unadjusted exit polls showed Trump losing among independents, and thus didn't match reality; but it turns out that Charnin never actually looked at the unadjusted exit polls, which showed Trump winning among independents while Hillary still won overall. When I challenged Charnin over this on Facebook, he ignored me the first several times and then eventually blocked me. I recently heard that he's been blocking others who challenge him. So he has pretty much exposed himself as a partisan hack who won't let facts get in the way of how he expects an election to be stolen. And as of late, his partisan hackery has gone in the anti-Democratic direction, probably because he felt burned by the 2016 Democratic primaries.
What does that mean for the Roy Moore election? His credibility is torpedoed, but that doesn't automatically invalidate his analysis of the Alabama race; we'd have to look at it. But I find his analysis there questionable as well. First of all, Charnin claims that the exit polls showed Moore was the true winner, which is technically true but highly misleading. I captured the
Alabama Senate exit poll: it indicates Moore getting 49.5% to Jones' 48.5%. Compare that to the official results: Jones 50.0%, Moore 48.3%. There's a disparity, but a very minor one that would almost certainly be within any poll's margin of error. Second, there's the comparison of Moore-vs.-Jones percentages between straight-party and non-straight-party voters. I don't think it's been sufficiently shown that this is actually anomalous, especially considering that the Moore race was no typical election, so applying assumptions about how voters "should" act is especially likely to mislead.
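To make the margin-of-error point concrete, here's a rough check. The sample size is a hypothetical assumption on my part (I don't have the n for this particular exit poll), but state exit polls typically run on the order of a couple thousand respondents:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p with sample size n,
    assuming simple random sampling. Exit polls use clustered precinct
    sampling, so their true margin of error is larger -- this is a floor."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical sample size of 2,000; Moore's exit-poll share was 49.5%.
moe = margin_of_error(0.495, 2000)
print(f"MOE: +/-{moe:.1%}")
# The poll-vs-official gap for Moore (49.5% vs. 48.3%) is 1.2 points,
# comfortably inside even this understated margin.
```

Even before accounting for the cluster-sampling inflation that real exit polls carry, a 1.2-point gap is unremarkable, which is why the "exit poll showed Moore winning" framing is so misleading.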
Does Moore deserve a chance to see if the votes were counted accurately? Of course he does, just like any other candidate would. And I don't oppose an election challenge in principle, but I do oppose the idea that he has a legitimate claim to being the true winner. It's also worth noting that Alabama's GOP establishment, for whatever reason,
wanted the right to destroy the ballot images that would have made auditing the election results much easier.