I’ve been looking into how Applicant Tracking Systems parse CVs and I keep seeing a gap between assumptions and actual behavior.
In tests with different CV versions, text-first, minimal layouts seem to preserve information better than visually structured ones.
I’m curious how much real-world evidence there is behind common ATS advice versus folklore.
For those who’ve worked with hiring systems or done testing: Have you seen measurable differences in parsing accuracy based on layout?
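For context, my comparisons were roughly along these lines: feed each layout variant the same ground-truth content, parse it, and score how many fields survive. A minimal sketch in Python (the field names and the scoring rule are simplified assumptions of mine, not a standard ATS metric):

    # Score how many known-good fields a parser recovered from a given CV variant.
    # Exact match for scalar fields, overlap count for set-valued ones like skills.
    from typing import Any

    GROUND_TRUTH: dict[str, Any] = {
        "name": "Jane Example",
        "email": "jane@example.com",
        "phone": "+44 7700 900000",
        "skills": {"python", "sql", "airflow"},
    }

    def field_recovery_rate(parsed: dict[str, Any], truth: dict[str, Any]) -> float:
        hits = total = 0.0
        for key, expected in truth.items():
            got = parsed.get(key)
            if isinstance(expected, set):
                total += len(expected)
                hits += len(expected & {str(s).lower() for s in (got or [])})
            else:
                total += 1
                hits += float(str(got or "").strip().lower() == str(expected).lower())
        return hits / total if total else 0.0

    # Run the same content through each layout variant and compare, e.g.
    # field_recovery_rate(parse("cv_minimal.pdf"), GROUND_TRUTH) vs.
    # field_recovery_rate(parse("cv_two_column.pdf"), GROUND_TRUTH)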
Sophisticated ATSs use dedicated CV parsers such as Textkernel, RChilli, and Daxtra. These don't just extract text; they return structured data from the CV: personal information, skills, work history, and dates.
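To give a concrete picture, the record they hand back looks roughly like this (the field names are my own simplification, not any vendor's actual schema):

    # Illustrative shape of a parsed-CV record; not Textkernel's, RChilli's, or
    # Daxtra's real schema, just the kind of structure these parsers return.
    from dataclasses import dataclass, field

    @dataclass
    class WorkEntry:
        employer: str
        title: str
        start_date: str | None = None  # typically normalised, e.g. "2021-04"
        end_date: str | None = None    # None for a current role

    @dataclass
    class ParsedCV:
        full_name: str
        email: str | None = None
        phone: str | None = None
        skills: list[str] = field(default_factory=list)
        work_history: list[WorkEntry] = field(default_factory=list)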
On the LLM side, I wrote a CV parser that uses Mistral OCR to extract the text and an LLM to structure the data, and it has worked well even for multilingual CVs.
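Roughly, that pipeline looks like the sketch below. I'm assuming the mistralai Python SDK here; the model names, the client.ocr.process / client.chat.complete calls, and the prompt are simplified from my setup, so treat them as approximate rather than authoritative.

    # OCR the CV with Mistral OCR, then ask an LLM to map the text onto a fixed JSON shape.
    import json
    import os

    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    def ocr_cv(document_url: str) -> str:
        # Returns the CV as markdown text, one chunk per page.
        resp = client.ocr.process(
            model="mistral-ocr-latest",
            document={"type": "document_url", "document_url": document_url},
        )
        return "\n\n".join(page.markdown for page in resp.pages)

    def structure_cv(cv_text: str) -> dict:
        # Ask the model for JSON only; response_format keeps the reply parseable.
        prompt = (
            "Extract the following from this CV and return only JSON with keys "
            "full_name, email, phone, skills (list of strings) and work_history "
            "(list of {employer, title, start_date, end_date}). Use null when missing.\n\n"
            + cv_text
        )
        resp = client.chat.complete(
            model="mistral-large-latest",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    # Usage: structure_cv(ocr_cv("https://example.com/cv.pdf"))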
I've also collected what I learned into a resource. It's not a tool and it doesn't generate CVs; it's a technical, text-first explanation of what tends to break parsing, what survives reliably, and how to make structural decisions that don't depend on folklore or guesswork.
Sharing it here in case it’s useful for others looking into ATS behavior:
https://gumroad.com/l/atspasskit
Happy to answer technical questions or clarify assumptions if anyone’s interested.