Part of the goal in releasing the dataset is to highlight how hard PDF parsing can be. Reducto's models are SOTA, but they aren't perfect.
We constantly see alternatives showcase one ideal table as proof that they're accurate. Parsing some tables is not hard.
What happens when a table has merged cells, dense text, rotations, or no gridlines? Will your table outputs be identical when a user uploads the same document twice? (A quick way to check that is sketched below.)
Our team is relentlessly focused on solving for the full range of scenarios so our customers don't have to. Excited to share more about our next-gen models soon.
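One way to probe that consistency question is a repeatability check: run the same extractor over the same file several times and compare canonical serializations of the output. A minimal sketch, assuming you have some extract(path) callable (a placeholder for whatever parser you're testing) whose output is JSON-serializable:

    import json

    def is_stable(extract, path, runs=3):
        # extract is any callable mapping a file path to JSON-serializable
        # table data; it stands in for the parser under test.
        outputs = {json.dumps(extract(path), sort_keys=True) for _ in range(runs)}
        # One unique canonical serialization means all runs agreed.
        return len(outputs) == 1

Canonicalizing with sort_keys avoids flagging harmless key-order differences, so any remaining mismatch points to genuine nondeterminism in the parser.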
Not surprising to see Reducto at the top; it's by far the best option we've tried.
This is great, but are there datasets for this already? I know PubTables-1M has roughly a million labeled tables. Also, how important are table schemas as a percentage of overall unstructured documents?
I have real-world bank statements that no PDF/AI extractor I've found can do a good job on.
(To summarize, the core challenge appears to be recognizing nested columnar layouts combined with odd line wrapping within those columns; a rough sketch of why that defeats simple heuristics follows below.)
Is there anyone I can submit a few example pages to for consideration in some benchmark?
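To make the failure mode concrete: a common baseline is to infer column bands from the horizontal extents of the words on a page. A minimal sketch, assuming pdfplumber and a hypothetical statement.pdf (the file name, the 5pt slack, and a non-empty first page are all assumptions):

    import pdfplumber

    # pdfplumber's extract_words() returns one dict per word with
    # x0/x1/top/bottom coordinates; assumes the page has text.
    with pdfplumber.open("statement.pdf") as pdf:
        words = pdf.pages[0].extract_words()

    # Merge overlapping horizontal extents into candidate column bands.
    spans = sorted((w["x0"], w["x1"]) for w in words)
    bands = [list(spans[0])]
    for x0, x1 in spans[1:]:
        if x0 <= bands[-1][1] + 5:  # 5pt slack is an arbitrary guess
            bands[-1][1] = max(bands[-1][1], x1)
        else:
            bands.append([x0, x1])

    # Bucket each word into the band containing its left edge.
    columns = [[] for _ in bands]
    for w in words:
        for i, (lo, hi) in enumerate(bands):
            if lo <= w["x0"] <= hi:
                columns[i].append(w["text"])
                break

On a clean statement the bands line up with the real columns; the moment a description wraps onto a second line or a sub-column nests inside a parent column, the extents bleed into each other, the bands merge, and every downstream row assignment goes with them.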