That's the problem with current deep learning models: they don't seem to know when they're wrong.
There was so much hype about AlphaGo years ago, which seemed to be very good at reasoning about what's good and what's not, that I thought some form of "AI" was really going to come relatively soon. The reality we have these days is statistical models running without any constraints, making up rules as they go.
I'm really thankful for AI-assisted coding, code reviews, and the many other things that came from this, but the fact is, these really are just assistants that will make very bad mistakes, and you need to watch them carefully.
I don't think that's the case. When a model is reasoning, it sometimes starts gaslighting itself and "solving" a completely different problem than the one you gave it. Reasoning can help in general, but very frequently it also makes the model more nondeterministic. Without reasoning, it usually just writes some code from its training data; with reasoning, it can end up hallucinating hard. Yesterday, I asked Claude's thinking mode to solve a problem for me in C++ and it showed the result in Python.
If I had just come out of a cave after a year and someone asked me who was on TV last night, I wouldn't invent a name and insist it was the truth. Nor would I claim you're a famous author of books about cooking chicken because it sounds plausible.
So AI hallucinations are nothing like human confusion or honest mistakes.
They gaslight you in "polite" Corporate Voice, you mean. It's one of the things I hate most about conversational agents. I always tell them to stop using the first person, to respond in short declarative sentences, and to stop pretending to have emotions, and it makes them a lot more tolerable.
Fuck polite. It's a machine. Machines can't be polite because they don't have the capacity for empathy. What you are calling polite is a vacuous and flowery waste of expensive tokens in a patronizing tone.
My favorite is when it politely gets it wrong again. And again.
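On the upthread point about telling these things to drop the first person and the fake emotions: you can pin that style once in a system prompt instead of repeating it every chat. A minimal sketch against the OpenAI chat API, with the model name and wording as placeholders:

    # Sketch: pin response style via a system prompt so the instructions
    # don't have to be repeated. Model name and wording are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STYLE = ("Do not use the first person. Respond in short declarative "
             "sentences. Do not simulate emotions or add pleasantries.")

    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[
            {"role": "system", "content": STYLE},
            {"role": "user", "content": "Explain what a perceptron is."},
        ],
    )
    print(reply.choices[0].message.content)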
Ah, but I (usually) know when I'm likely to be wrong if I give an answer, because I know when I'm not familiar enough with the subject. And if I do answer anyway, I will explicitly say it's an educated guess at best. What I will not do is just spout bullshit with the confidence of an orange-musk-puppet.
I took the screenshot of the bill in their article and ran it through the tool at https://va.landing.ai/demo/doc-extraction. The tool doesn't hallucinate any of the values reported in the article. In fact, the value for Profit/loss for continuing operations is 1654 in its extraction, which is the ground truth, yet they've drawn a red bounding box around it.
good catch on the 1654, will edit that on our blog! try it multiple times, we've noticed esp for tabular data it's fairly nondeterministic. we trialed it over 10 times on many financial CIMs and observed this phenomenon.
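If anyone wants to check the nondeterminism claim themselves, a minimal sketch: run the same document through the extractor N times and flag fields whose values vary. extract() here is a hypothetical stand-in for whatever tool you're testing, assumed to return a flat dict of field -> value:

    # Sketch: measure run-to-run variance of a document extractor.
    # extract() is a hypothetical stand-in for the tool under test.
    from collections import defaultdict

    def extraction_variance(extract, document, runs=10):
        seen = defaultdict(set)
        for _ in range(runs):
            for field, value in extract(document).items():
                seen[field].add(value)
        # Fields with more than one distinct value are nondeterministic.
        return {f: vals for f, vals in seen.items() if len(vals) > 1}

    # Usage: unstable = extraction_variance(my_extract, "bill.png")
    # e.g. {"profit_loss_continuing_ops": {"1654", "1,654"}}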
+1, and worse: beyond pointing out where it was wrong, there were no clear test criteria, no description of the process, no side-by-side comparison, no details about either model, etc.
A conflict? It's their blog. They can post what they like, including adverts.
The news is they appear to be better than this other model. Their methodology might not be trustworthy but deliberately tanking the Ng model wouldn't be smart either.
At Pulse, we put the models to the test with complex financial statements and nested tables – the results were underwhelming to say the least, and suffered from many of the same issues we see when simply dumping documents into GPT or Claude.
It seems like you missed the point. Andrew Ng is not there to give you production-grade models. He's there to deliver proofs of concept that need refinement.
>Here's an idea that could use some polish, but I think as an esteemed AI researcher that it could improve your models. -- Andrew Ng
>OH MY GOSH! IT ISN'T PRODUCTION READY OUT OF THE BOX, LOOK AT HOW DUMB THIS STUFFED SHIRT HAPPENS TO BE!!! -- You
Nobody appreciates a grandstander. You're really treading on thin ice by attacking someone who has given so much to the AI community and asked for so little in return. Andrew Ng clearly does this because he enjoys it. You are here to self-promote and it looks bad on you.
we respect andrew a lot, as we mentioned in our blog! he's an absolute legend in the field, founded google brain, coursera, worked heavily on baidu ai. this is more to inform everyone not to blindly trust new document extraction tools without really giving them challenges!
> That's the standard tier of competence you expect from Ng. Academia is always close but no cigar.
Academics do research. You should not expect an academic paper to be turned into a business or production overnight.
The first neural network machine, the Mark I Perceptron, was built in the late 1950s for image recognition. It took nearly 70 years of largely non-commercial research to get from there to the very useful multimodal LLMs of today.
And on the other side, there are companies like Theranos, where you think the world will never be the same again, until you actually try the thing they're selling. A full cigar promised, but not even close.
Not saying this is the case with the OP company, but if you're ready to make sweeping generalizations about cigars like that on the basis of a commercial blog selling a product, you might as well invoke some healthy skepticism, and consider how the generalization works on both sides of the spectrum.
The whole corporation-glorifying, academia-bashing gaslighting narrative is getting very tiring lately.
I think there's a valid point about the production-readiness aspect. It's one thing to release a research paper, and another to market something as a service. The expectation levels are just different, and fair to scrutinize accordingly.
we're not the biggest believers in 'agentic' parsing! we definitely do believe there's a specific role for LLMs in the data ingestion pipeline, but it's more at the stage of turning bar graphs/charts/figures into structured markdown.
we're messing around with some agentic zooming around documents internally, will make our findings public!
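As a rough illustration of that figures-to-markdown step, here's what the call can look like against a generic multimodal chat API (model name and prompt are illustrative, not anyone's actual pipeline):

    # Sketch: ask a multimodal model to transcribe a chart image as a
    # markdown table. Model name and prompt are illustrative only.
    import base64
    from openai import OpenAI

    client = OpenAI()

    with open("bar_chart.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder multimodal model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this bar chart as a markdown table "
                         "with one row per bar. Do not guess missing values."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)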
If you want to try agentic parsing, we added support for sonnet-3.7 agentic parse and gemini 2.0 in LlamaParse: cloud.llamaindex.ai/parse (select advanced options / parse with agent, then a model).
However, this comes at a high cost in tokens and latency, but results in much better parse quality. Hopefully with newer models this can be improved.
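For anyone who would rather script it than click through the UI, the basic llama_parse Python flow looks roughly like this; the agentic-parse and model options mentioned above are selected in the cloud UI, and any kwargs for them are left out here rather than guessed at:

    # Sketch of the basic llama_parse client flow. File name is made up;
    # agentic/model options are chosen in the cloud UI, not shown here.
    from llama_parse import LlamaParse

    parser = LlamaParse(
        api_key="llx-...",       # from cloud.llamaindex.ai
        result_type="markdown",  # or "text"
    )

    docs = parser.load_data("nested_tables.pdf")
    print(docs[0].text[:500])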
The problem is that these PDFs come from paper, and OCR is exactly the step where that machine-readable data gets added.
The world has become much more digitized (for example, for any sale I get both a PDF and an XML version of my receipt, which is great), but not everything comes from computers; much of it is made by humans, for humans.
We have handwritten notes, printed documents, etc., and OCR has to handle them. Meanwhile, desktop OCR applications like Prizmo, and the latest versions of macOS, already produce much better output than these models. There are also specialized free applications for extracting tables from PDF files (PDF files are a bunch of fonts and pixels; they carry no information about layout, tables, etc.).
We have these tools, and they work well. There's even the venerable Tesseract, built to OCR scanned paper, which has had a neural network layer for years. Yet we still throw LLMs at everything and cheer like 5-year-olds when they do 20% of what these systems can, acting like this technology hasn't existed for two decades.
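To make the table case concrete, a minimal sketch with pdfplumber, one of those free tools; it works off character and line positions in text-based PDFs, no ML involved, though scanned pages would still need OCR first:

    # Sketch: extract tables from a text-based PDF with pdfplumber.
    # Works from character/line positions; the file name is made up.
    import pdfplumber

    with pdfplumber.open("financial_statement.pdf") as pdf:
        for page_no, page in enumerate(pdf.pages, start=1):
            for table in page.extract_tables():
                print(f"-- page {page_no} --")
                for row in table:
                    print(row)  # list of cell strings (None for empty cells)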
The funny thing is that sometimes we need to machine-read documents produced by humans on machines, but the actual source is almost always machine-readable data.
A lot of the time you are OCRing documents from people who do not care how easy it is for the reader to extract the data. A common example is regulatory filings: the goal is to comply with the law, not to help people read your data. Or perhaps it's from a source that sells the data, or holds the copyright and doesn't want to make it easy for others to use it in ways beyond their intention, etc.
At least an AI will respond politely when you point out its mistakes.
"We ran our OCR offering against competition. We find ours to be better. Sign up today."
It feels like an ad masquerading as a news story.
https://x.com/AndrewYNg/status/1895183929977843970
It's a product released by a company Ng cofounded. So expecting production-readiness isn't asking for too much in my opinion.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
The real solution would be to have machine-readable data embedded in those PDFs, and have the table be built around that data.
We could then have actual machine-readable financial statements and reports, much like our passports.
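This already exists in places: e-invoice standards like Factur-X/ZUGFeRD ship the XML inside the PDF as an attachment. A minimal sketch of the mechanism with pypdf, with the file names made up for illustration:

    # Sketch: embed machine-readable XML inside a PDF as an attachment,
    # the mechanism used by Factur-X/ZUGFeRD e-invoices. File names are
    # illustrative.
    from pypdf import PdfReader, PdfWriter

    writer = PdfWriter()
    writer.append(PdfReader("statement.pdf"))  # copy human-readable pages

    with open("statement.xml", "rb") as f:
        writer.add_attachment("statement.xml", f.read())

    with open("statement_with_data.pdf", "wb") as out:
        writer.write(out)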
Agree on the hand-written part.