As the "proper solution" here is of course not using PDFs that are hard-to-parse, but force elections to have machine parseable outputs. And LLMs can "fix in place" stupid solutions.
That's not a hate on the author though. I needed to do some PDF parsing for bank statements before; but also; the proper long-term solution is force banks (by law or by public interest) to have parseable statements, not parse it!
Like putting LLMs to understand bad codebase will not fix the bad codebase, but will build on top of it.
Totally reasonable view, and one of our volunteers actually got the law in Kansas changed to mandate electronic publishing of statewide precinct results in a structured format! But finding legislative champions for this issue isn't easy.
I've tried using LLMs to do the exact same thing (turning precinct-level election results into a spreadsheet), and in my experience they worked rather poorly: less accurate than traditional OCR, and considering how many fixes I had to make, altogether slower than manual entry. The resolution of the page made an outsized difference. It's nice that you got it to work, but I am skeptical of it as a permanent solution.
Tangentially: I appreciate what OpenElections does; however, I wish there were a similar organization that did not limit itself to officially certified results. There are already other organizations who collect precinct results post-2016, and using only official results basically limits you to 2008 and afterwards, but historical election results are the real intrigue. Not to mention that I have noticed many blatant errors in election results that have supposedly been "certified" by a state/county government. The precinct results Pennsylvania publishes, for example, are riddled with issues.
I think that we should encourage elections to _not_ be standardized. The various polities in the USA have many different problems and should not be forced to conform to a specific way that elections should be done. This is a social problem and we should not cram it into a technical solution. Legibility of elections should be maintained at the local level; trying to make things legible at a national level is, in my opinion, unwanted. As much as I would like the data to be clean, people are not so clean. Even if they used slightly more structured formats than PDFs, the differences between polities must be maintained as long as they are different polities.
The way that OpenElections handles this, with separate 'sources' and 'data' directories, is I think a good way to bridge the gap.
Not being standardized is fine and even a positive (diversity of technology vendors is a security feature and increases confidence in elections). But producing machine readable outputs of some sort, instead of physical paper and PDFs, is clearly a positive as well.
How is it unwanted to have a standardized database of _results_? They're partly going to be used in a federal context, right?
We do this pretty decently in India: the results of pretty much every election run by the Election Commission are posted on https://results.eci.gov.in/# and it's the same for the whole country.
I have had to do some bank-statement-to-CSV conversions before, and still do occasionally, and https://tabula.technology/ has been invaluable for this.
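If you're scripting it, the tabula-py wrapper gets you there in a few lines. A minimal sketch, assuming a statement.pdf with detectable table structure (the file and output names are placeholders):

    # Sketch: dump every table Tabula detects in a PDF to CSV.
    # Requires Java, since tabula-py drives the Tabula JVM library.
    import tabula

    # Returns one pandas DataFrame per detected table.
    tables = tabula.read_pdf("statement.pdf", pages="all", multiple_tables=True)
    for i, df in enumerate(tables):
        df.to_csv(f"statement_table_{i}.csv", index=False)

Scanned (image-only) PDFs are a different story, though; Tabula only works on PDFs with an embedded text layer.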
In other news, any bank that does not produce a standard CSV file for their bank statements should be fined $1m per day until they do. It's ridiculous that this isn't the first option when you go to download them.
I did it with one of Go's PDF parsers (I think rsc has one, and ... some guy... forked it and added some features.. it's still kind of manual but worked great)
I had Gemini convert a bunch of charity forms yesterday, and the deviation was significant and problematic. Rephrasing questions, inventing new questions, changing the emphasis; it might be performing a lot better for numerical data sets, but it's rare to have one without a meaningful textual component.
I've seen similar. I wonder if traditional organizational solutions, like those employed by the US Military or IBM, might be applicable. Redundancy is one of their tools for achieving reliability from unreliable parts. Instead of asking a single LLM to perform the task, ask 10 different LLMs to perform the same task 10 different times and count them like votes.
Yeah, what I did to "solve" my issue was to use several models (4), then, where there was any disagreement, farm it out to humans (2). 60% went to humans in the end.
I suspect if I'd done some corrective transformations before LLM scanning the success rate would have been higher, but the cost threshold of the project didn't warrant it.
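In sketch form, that consensus-then-escalate flow is just a field-level vote across model outputs. A minimal sketch, where call_model is a hypothetical stand-in for whatever OCR/LLM APIs you're actually calling:

    # Sketch of multi-model consensus with human escalation.
    # call_model() is a hypothetical stub; each call returns a
    # dict of field -> value extracted from one scanned page.
    from collections import Counter

    MODELS = ["model_a", "model_b", "model_c", "model_d"]  # placeholder names

    def call_model(model: str, page_image: bytes) -> dict[str, str]:
        raise NotImplementedError  # wire up real API calls here

    def extract_with_consensus(page_image: bytes):
        outputs = [call_model(m, page_image) for m in MODELS]
        accepted, needs_human_review = {}, []
        for field in outputs[0]:
            votes = Counter(o.get(field) for o in outputs)
            value, count = votes.most_common(1)[0]
            if count == len(MODELS):   # unanimous: accept automatically
                accepted[field] = value
            else:                      # any disagreement: escalate
                needs_human_review.append(field)
        return accepted, needs_human_review

Requiring unanimity rather than a simple majority matches the "any disagreement goes to humans" policy above; relaxing it to a majority vote would cut the human workload at the cost of some accuracy.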
In college (about 15 years ago) I worked for a professor who was compiling precinct-level results for old elections. My job was just to request the info and then do manual data entry. It was abysmally slow.
This application seems very good - but it's still a bit amazing that lawmakers haven't just required that all data be uploaded via CSV! Even if every CSV had a slightly different format, it would be way easier for everyone (LLM or not).
I could be wildly off-base, but I wonder if some of these systems are airgapped, and the only way the data comes off of the closed system is via printing, to avoid someone inserting a flash drive full of malware in the guise of "copying the CSV file." Obviously there are or should be technical ways to safely extract data in a digital format, but I can see a little value in the provable safety that airgapping gives you.
One key problem is that the US has tens of thousands of local governments, and each of them gets to solve problems in its own way.
Digital literacy of the kind that understands why releasing a CSV file is more valuable than a PDF is rare enough that most of them won't have someone with that level of thinking in a decision-making role.
> most of them won't have someone with that level of thinking
That is an unfair take on it. Come out to the midwest and talk to some of the clerks in the small townships and counties out here. They do know the value of improved data and tech. And they know that investing in better tech can result in a little less money in the bank, which results in less gas to plow the roads, less money to pay someone to mow the ditches, which means one more car wrecked by hitting a deer. So the question is often not about CSV vs. PDF. It is about the overall budget to do all the things that matter to the people of their town. Tech sometimes just doesn't make the cut.
Besides, elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.
People running the smallest of government entities in this country tend to have pretty good heads on their shoulders. They get voted out pdq when they don't.
There are lots of posts on HN about developments and companies doing OCR and document extraction. It's a classic CV problem, but it has still come a long way in the past couple of years.
Yeah, this is a very well-traveled road, but LLMs have made some big improvements. If you asked me (the guy who wrote the original piece linked above) what I'd use if accuracy alone were the goal, it would probably be AWS Textract. But accuracy and structure? Gemini.
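For reference, single-page table extraction with Textract looks roughly like this. A sketch via boto3, where the file name is a placeholder and multi-page PDFs would need the async StartDocumentAnalysis flow instead:

    # Sketch: pull table cells out of one scanned page with AWS Textract.
    import boto3

    textract = boto3.client("textract")

    with open("precinct_results.png", "rb") as f:  # placeholder file name
        response = textract.analyze_document(
            Document={"Bytes": f.read()},
            FeatureTypes=["TABLES"],
        )

    # Blocks come back as a flat list: TABLE blocks point at CELL blocks,
    # and CELL blocks point at their WORD blocks, all by ID.
    blocks = {b["Id"]: b for b in response["Blocks"]}
    for b in response["Blocks"]:
        if b["BlockType"] == "CELL":
            words = [
                blocks[cid]["Text"]
                for rel in b.get("Relationships", [])
                for cid in rel["Ids"]
                if blocks[cid]["BlockType"] == "WORD"
            ]
            print(b["RowIndex"], b["ColumnIndex"], " ".join(words))

You also get geometry and per-block confidence scores back, which helps when you want to flag low-confidence cells for review.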
As the "proper solution" here is of course not using PDFs that are hard-to-parse, but force elections to have machine parseable outputs. And LLMs can "fix in place" stupid solutions.
That's not a hate on the author though. I needed to do some PDF parsing for bank statements before; but also; the proper long-term solution is force banks (by law or by public interest) to have parseable statements, not parse it!
Like putting LLMs to understand bad codebase will not fix the bad codebase, but will build on top of it.
oh well c'est la vie
> I had Gemini convert a bunch of charity forms yesterday, and the deviation was significant and problematic.
I just quickly took a scanned document and the transcription looks good.
https://19january2021snapshot.epa.gov/sites/static/files/201...
https://g.co/gemini/share/d315b4047224
It even got the faded partial date stamp.
That's one of the best scanned documents I've seen in years. Most scanning now is via phone.
> still a bit amazing that lawmakers haven't just required that all data be uploaded via CSV!
Not, you know, for any nefarious purpose…but because what we’ve used forever was good enough for grandpappy, so it’s obviously good enough for us.
/cough
Since it's printed, it is clearly already in a database somewhere. Why can't that just be made public too?
Seems bizarre to OCR printed documents (although I am aware of many companies doing this to parse invoices, etc.)