Before a specialist physician can treat a patient, they must collect data about the patient, determine if the referral meets medical necessity, and see if insurance will cover the procedure. This involves analyzing referral documents, coordinating with primary care providers for missing information, and verifying insurance coverage—before they can even see the patient. It’s a manual back-and-forth process that is time-consuming, prone to errors, and slows down patient care.
Cenote mostly automates this workflow. (“Mostly”, because sometimes a human-in-the-loop is needed—more on that below). We use LLMs, OCR, and RPA to extract and validate referral data, check for medical necessity, and initiate insurance verification—all in minutes, not hours. This allows specialists to focus on care, reduce administrative burden, and ensure faster, more reliable insurance payments.
One of us (Kristy) dealt with this after an emergency medical event she had a couple years ago. The time it took her to find a clinic that could receive her medical record and insurance exacerbated her injury. It seemed crazy to have to wait that long for what turned out to be the dumbest of technical reasons. The three of us became friends at a book club, got talking about this, and decided to build software to deal with it.
Cenote automates the back office for medical clinics. When a referral lands in a specialist’s inbox, our software kicks in. We first run the document through OCR. We then use an LLM to extract the fields our customer has told us they’re looking for. If we detect the referral is missing data, we send a message back to the referring provider asking for more. Finally, we integrate with our customer’s EHR (Electronic Health Record) via RPA or API and file the document and extracted data in the appropriate location.
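To make the flow concrete, here’s a rough sketch of that pipeline. The function names, required fields, and dict-based data model are illustrative assumptions for this post, not our production code:

```python
# Illustrative sketch of the referral pipeline described above (all names are
# hypothetical; each step is stubbed out rather than calling a real service).

REQUIRED_FIELDS = ["patient_name", "date_of_birth", "insurance_id", "reason_for_referral"]

def ocr_document(path: str) -> str:
    """OCR step: turn the faxed/uploaded document into text."""
    raise NotImplementedError

def extract_fields(raw_text: str) -> dict:
    """LLM step: prompt a model to pull out the fields this clinic cares about."""
    raise NotImplementedError

def request_missing_info(fields: dict, missing: list) -> None:
    """Message the referring provider asking for the data that wasn't in the referral."""
    raise NotImplementedError

def write_to_ehr(fields: dict) -> None:
    """File the document and extracted data in the EHR, via its API or via RPA."""
    raise NotImplementedError

def process_referral(path: str) -> dict:
    raw_text = ocr_document(path)
    fields = extract_fields(raw_text)
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        request_missing_info(fields, missing)   # loop back to the referring provider
    else:
        write_to_ehr(fields)                    # everything present: file it in the EHR
    return fields
```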
The OCR returns confidence scores. If the LLM has to reason over OCR output with low confidence, we flag this in the UI and ask a human to review before moving forward.
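A simplified version of that gate looks like the snippet below; the threshold value and the shape of the OCR output (text spans with per-span confidence scores) are illustrative assumptions:

```python
# Human-in-the-loop gate (simplified). The 0.85 threshold and the span format
# are illustrative, not the values we actually ship.
CONFIDENCE_THRESHOLD = 0.85

def needs_human_review(ocr_spans: list[dict]) -> bool:
    """ocr_spans: e.g. [{"text": "DOB: 01/02/1980", "confidence": 0.62}, ...]"""
    return any(span["confidence"] < CONFIDENCE_THRESHOLD for span in ocr_spans)
```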
We went into this expecting to do a lot of fine-tuning / ML infra work, but the technical needs turned out to be a lot more elementary than that. For example, we have spent far more time building a history-page view of previously submitted files than we have spent training our own models. Many clinics still rely on faxed (!) referrals, and even well-funded practices use obsolete workflows.
While we provide a UI for clinics to upload documents and for human-in-the-loop intervention, our system can also run headless: all core functionality—data extraction, EHR integration, and even the back-and-forth communication with referring providers—works without any UI interaction.
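For example, a clinic (or an upstream system) could trigger the same pipeline programmatically. The endpoint, auth scheme, and payload below are hypothetical, just to show what headless use could look like:

```python
# Hypothetical example of driving the pipeline headlessly; the URL, header, and
# fields are made up for illustration and are not a documented API.
import requests

resp = requests.post(
    "https://api.example-cenote.com/v1/referrals",
    headers={"Authorization": "Bearer <api-key>"},
    files={"document": open("referral.pdf", "rb")},
    data={"clinic_id": "clinic-123"},
)
print(resp.json())  # extracted fields, any missing-data flags, EHR write status
```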
In terms of pricing, we charge an annual SaaS fee and a one-time implementation fee. We don’t have one-size-fits-all pricing on our website yet, but we’ll get there eventually.
If you have medical clinic experience, we’d love to hear your thoughts! And everyone’s feedback is welcome. Thanks for reading!
As far as specialists go… when I go to a specialist, they key in my insurance card and have an approval within seconds. Of course with a serious injury I’d be at an ER not sitting around a specialist’s waiting room.
My biggest concern, though, is this will be used to replace back office staff and serious mistakes will get made, patients will be the ones stuck with figuring out insurance nightmares - there won’t be any back office staff left to help, and providers will be given heavier workloads with less assistance. And no, I don’t trust LLMs to make medical decisions.
Speaking from first-hand experience, you are wrong.
> this will be used to replace back office staff and serious mistakes will get made, patients will be the ones stuck with figuring out insurance nightmares
on this you're spot on!
Last few times I’ve been in the ER, the registration guy didn’t come around until we were already in an ER hospital bed and waiting around after being triaged.
There may be really terribly run hospitals who risk lawsuits (or have already been sued for millions) - I would avoid such places.
To minimize risk, we implement safeguards to prevent hallucinations, and our system is built to flag potential missing or unclear information rather than override clinical judgment.
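To give one simplified illustration of the kind of check we mean (not our exact implementation): before accepting an extracted value, require that it actually appears in the OCR text, and flag it for human review otherwise:

```python
# Illustrative grounding check, not our actual safeguard: any extracted value
# that can't be found verbatim in the source text gets flagged for review.
def ungrounded_fields(fields: dict, source_text: str) -> list[str]:
    return [name for name, value in fields.items() if str(value) not in source_text]
```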
So yea, the central question in most systems isn't "Is this patient getting better?", it's "Can I bill this visit?"
> This tailored approach
It really is AI slop all the way down now isn't it?
This seems helpful, but what if the flagging system misses an error? Do you measure the accuracy of your various systems on your customer data? These are typically the more challenging aspects of integrating ML in healthcare.