karel-3d · 6 months ago
I was thinking: could LLMs be regressive in the long term?

As the "proper solution" here is of course not using PDFs that are hard-to-parse, but force elections to have machine parseable outputs. And LLMs can "fix in place" stupid solutions.

That's not a dig at the author, though. I've had to parse PDF bank statements before; but the proper long-term solution is to force banks (by law or by public pressure) to publish parseable statements, not to parse them!

Similarly, pointing LLMs at a bad codebase won't fix the bad codebase; it will just build on top of it.

oh well c'est la vie

dwillis · 6 months ago
Totally reasonable view, and one of our volunteers actually got the law in Kansas changed to mandate electronic publishing of statewide precinct results in a structured format! But finding legislative champions for this issue isn't easy.
ghghgfdfgh · 6 months ago
I've tried using LLMs to do the exact same thing (turning precinct-level election results into a spreadsheet), and in my experience they worked rather poorly. Less accurate than traditional OCR, and considering how many fixes I had to make, altogether slower than manual entry. The resolution of the page made an outsized difference. It's nice that you got it to work, but I am skeptical of it as a permanent solution.

Tangentially: I appreciate what OpenElections does; however, I wish there were a similar organization that did not limit itself to officially certified results. There are already other organizations that collect precinct results post-2016, and using only official results basically limits you to 2008 and afterwards, but historical election results are the real intrigue. Not to mention that I have noticed many blatant errors in election results that have supposedly been "certified" by a state/county government. The precinct results Pennsylvania publishes, for example, are riddled with issues.

model-15-DAV · 6 months ago
I think that we should encourage elections to _not_ be standardized. The various polities in the USA face many different issues and should not be forced to conform to one specific way of running elections. This is a social problem, and we should not cram it into a technical solution. Legibility of elections should be maintained at the local level; trying to make things legible at a national level is, in my opinion, unwanted. As much as I would like the data to be clean, people are not so clean. Even if they used slightly more structured formats than PDFs, the differences between polities must be maintained as long as they are different polities.

The way OpenElections handles this, with 'sources' and 'data' directories, is I think a good way to bridge the gap.

missingcolours · 6 months ago
Not being standardized is fine and even a positive (diversity of technology vendors is a security feature and increases confidence in elections). But producing machine-readable outputs of some sort, instead of physical paper and PDFs, is clearly a positive as well.
shash · 6 months ago
How is it unwanted to have a standardized database of _results_? They're partly going to be used in a federal context, right?

We do this pretty decently in India: the results of pretty much every election run by the Election Commission are posted at https://results.eci.gov.in/# and it's the same for the whole country.

okayishdefaults · 6 months ago
Just breaking down the thought a little, we truly can't say elections shouldn't have standards, right?
IgorPartola · 6 months ago
I have had to do some bank-statement-to-CSV conversions before and still do occasionally, and https://tabula.technology/ has been invaluable for this.

In other news, any bank that does not produce a standard CSV file for their bank statements should be fined $1m per day until they do. It's ridiculous that this isn't the first option when you go to download them.
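
For anyone scripting it, a minimal sketch with the tabula-py wrapper (it shells out to tabula-java, so you need a Java runtime; the file names here are made up):

    # pip install tabula-py  (requires Java; wraps tabula-java)
    import tabula

    # Pull every table on every page into pandas DataFrames
    tables = tabula.read_pdf("statement.pdf", pages="all")
    for i, df in enumerate(tables):
        df.to_csv(f"statement_table_{i}.csv", index=False)

    # Or dump straight to one CSV in a single call
    tabula.convert_into("statement.pdf", "statement.csv",
                        output_format="csv", pages="all")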

karel-3d · 6 months ago
I did it with one of Go's PDF parsers (I think rsc has one, and ... some guy ... forked it and added some features ... it's still kind of manual, but it worked great).
Normal_gaussian · 6 months ago
I'm not convinced.

I had Gemini convert a bunch of charity forms yesterday, and the deviation was significant and problematic: rephrasing questions, inventing new questions, changing the emphasis. It might perform a lot better on numerical data sets, but it's rare to have one without a meaningful textual component.

timschmidt · 6 months ago
I've seen similar. I wonder if traditional organizational solutions, like those employed by the US Military or IBM, might be applicable. Redundancy is one of their tools for achieving reliability from unreliable parts. Instead of asking a single LLM to perform the task, ask 10 different LLMs to perform the same task 10 different times and count them like votes.
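
A minimal sketch of that voting idea in Python (the transcribe function and model names are stand-ins, not real APIs):

    from collections import Counter

    MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model names

    def transcribe(model: str, page_image: bytes) -> str:
        """Stand-in for a real vision-LLM call that returns CSV text."""
        raise NotImplementedError

    def vote(page_image: bytes, runs_per_model: int = 10) -> tuple[str, float]:
        # Treat each independent transcription as a ballot and count them.
        ballots = Counter(
            transcribe(m, page_image)
            for m in MODELS
            for _ in range(runs_per_model)
        )
        winner, count = ballots.most_common(1)[0]
        return winner, count / sum(ballots.values())  # answer + agreement rate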
Normal_gaussian · 6 months ago
Yeah, what I did to "solve" my issue was to use several models (4), then farm out to humans (2) wherever there was any disagreement. 60% went to humans in the end.

I suspect the success rate would have been higher if I'd done some corrective transformations before the LLM scanning, but the cost threshold of the project didn't warrant it.
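
Roughly this control flow, if anyone's curious (a sketch only; the transcribe function and model names are placeholders):

    human_review_queue: list[bytes] = []

    def transcribe(model: str, form_image: bytes) -> str:
        """Stand-in for a real vision-model call."""
        raise NotImplementedError

    def process(form_image: bytes, models=("m1", "m2", "m3", "m4")) -> str | None:
        # Exact string agreement across all models counts as consensus.
        answers = {transcribe(m, form_image) for m in models}
        if len(answers) == 1:
            return answers.pop()               # unanimous: accept automatically
        human_review_queue.append(form_image)  # any disagreement: escalate
        return None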

latentpot · 6 months ago
Why complicate it? One LLM to do the work, another to reflect, and then a decision engine to review would be cheaper.
nojito · 6 months ago
Not sure I believe this.

I just quickly took a scanned document and the transcription looks good.

https://19january2021snapshot.epa.gov/sites/static/files/201...

https://g.co/gemini/share/d315b4047224

It even got the faded partial date stamp.

Normal_gaussian · 6 months ago
Well, bully for you, accusing people of lying.

That's one of the best-scanned documents I've seen in years. Most scanning now is done via phone.

simonw · 6 months ago
Did you put as much work into it as Derek did? He spent a full hour with Gemini processing the longer document.
7moritz7 · 6 months ago
Use 2.5 Pro on AI Studio, not the Gemini app.
Normal_gaussian · 6 months ago
I did. I was scanning about 400 forms.
dwillis · 6 months ago
That's what I did.
fasthands9 · 6 months ago
In college (about 15 years ago) I worked for a professor who was compiling precinct-level results for old elections. My job was just to request the info and then do manual data entry. It was abysmally slow.

This application seems very good, but it's still a bit amazing that lawmakers haven't just required that all data be uploaded as CSV! Even if every CSV had a slightly different format, it would be way easier for everyone (LLM or not).
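
Even slightly different CSVs are cheap to reconcile; a hedged sketch with pandas (the column names are invented for illustration):

    import pandas as pd

    # Map each jurisdiction's header spelling onto one canonical schema
    COLUMN_MAP = {
        "Precinct Name": "precinct", "precinct": "precinct",
        "Candidate": "candidate", "CANDIDATE NAME": "candidate",
        "Votes": "votes", "Total Votes": "votes",
    }

    def normalize(path: str) -> pd.DataFrame:
        df = pd.read_csv(path).rename(columns=COLUMN_MAP)
        return df[["precinct", "candidate", "votes"]]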

xp84 · 6 months ago
I could be wildly off base, but I wonder if some of these systems are air-gapped, and the only way the data comes off the closed system is via printing, to avoid someone inserting a flash drive full of malware in the guise of "copying the CSV file." Obviously there are, or should be, technical ways to safely extract data in a digital format, but I can see a little value in the provable safety that air-gapping gives you.
dwillis · 6 months ago
In some cases that's true, but for many jurisdictions the results systems are third-party vendor platforms, too.
arlort · 6 months ago
You could always just print a QR code as well if that's the issue
simonw · 6 months ago
This is such an excellent example of a responsible and thorough application of vision LLMs to a gnarly data entry problem.
polskibus · 6 months ago
It's also an excellent example of how the lack of a mandated machine-readable format for government publishing is a PITA.
Mtinie · 6 months ago
If I was in power and wanted to continue said rule, I’d definitely discourage the adoption of any standardized formatting for election results.

Not, you know, for any nefarious purpose…but because what we’ve used forever was good enough for grandpappy, so it’s obviously good enough for us.

/cough

sitkack · 6 months ago
JSON to QR code would be a good start. PRIOR ART, inb4 a troll.
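
A sketch of the JSON-to-QR idea with the qrcode package (toy schema; note a single QR code tops out around 3 KB, so a big precinct table would need several codes or compression):

    # pip install qrcode[pil]
    import json
    import qrcode

    results = {  # toy example, not a real results schema
        "precinct": "Ward 3",
        "totals": {"Candidate A": 412, "Candidate B": 397},
    }

    # Encode the JSON payload; the printed page carries it across the airgap.
    img = qrcode.make(json.dumps(results, separators=(",", ":")))
    img.save("ward3_results.png")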
o11c · 6 months ago
You know, not ignoring the percentage column would mean you can do math checks yourself.
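
That check is cheap to automate; a sketch (made-up row shape, with the tolerance chosen to absorb rounding in the printed percentages):

    def check_row(votes: int, total: int, printed_pct: float,
                  tol: float = 0.05) -> bool:
        """Flag rows where the extracted count disagrees with the printed %."""
        return abs(votes / total * 100 - printed_pct) <= tol

    # e.g. 412 votes out of 809 total, report prints 50.93%
    assert check_row(412, 809, 50.93)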
antonkar · 6 months ago
Related: interesting mockups for turning X / open-source Bsky into massive direct-democracy "prosthetic" polls in each post.

And paid polls that the author claims will replace prediction markets:

https://x.com/MelonUsks/status/1929660387995115713

GardenLetter27 · 6 months ago
Why is the original source data not available anywhere digitally?

Since it's printed, it is clearly already in a database somewhere. Why can't that just be made public too?

It seems bizarre to OCR printed documents (although I am aware of many companies doing this to parse invoices, etc.).

simonw · 6 months ago
Welcome to government data.

One key problem is that the US has tens of thousands of local governments, and each of them get to solve problems in their own way.

Digital literacy of the kind that understands why releasing a CSV file is more valuable than a PDF is rare enough that most of them won't have someone with that level of thinking in a decision-making role.

codingdave · 6 months ago
> most of them won't have someone with that level of thinking

That is an unfair take on it. Come out to the Midwest and talk to some of the clerks in the small townships and counties out here. They do know the value of improved data and tech. And they know that investing in better tech can mean a little less money in the bank, which means less gas to plow the roads, less money to pay someone to mow the ditches, which means one more car wrecked by hitting a deer. So the question is often not about CSV vs. PDF. It is about the overall budget to do all the things that matter to the people of their town. Tech sometimes just doesn't make the cut.

Besides, elections tend to have their own tech provided by the county or state, so there is standardization and additional help on such critical processes.

People running the smallest of government entities in this country tend to have pretty good heads on their shoulders. They get voted out pdq when they don't.

nxrabl · 6 months ago
Very interesting! Is this the state of the art for accurate OCR of tabular PDFs, or is there other work in the space to compare against?
SnooSux · 6 months ago
There are lots of posts on HN about developments and companies doing OCR and document extraction. It's a classic CV problem, but it has still come a long way in the past couple of years.
dwillis · 6 months ago
Yeah, this is a very well-traveled road, but LLMs have made some big improvements. If you asked me (the guy who wrote the original piece linked above) what I'd use if accuracy alone were the goal, it would probably be AWS Textract. But accuracy and structure? Gemini.
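
For reference, a minimal Textract sketch with boto3 (the synchronous API, so it's limited to single-page image/PDF bytes; the file name is hypothetical and error handling is omitted):

    # pip install boto3; assumes AWS credentials are already configured
    import boto3

    client = boto3.client("textract")

    with open("precinct_page.png", "rb") as f:
        resp = client.analyze_document(
            Document={"Bytes": f.read()},
            FeatureTypes=["TABLES"],  # ask for table structure, not just words
        )

    # The response is a flat list of blocks: TABLE -> CELL -> WORD relationships
    cells = [b for b in resp["Blocks"] if b["BlockType"] == "CELL"]
    print(f"found {len(cells)} table cells")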