We're Alex, Martin and Laurent. We previously founded Wit.ai (W14), which we sold to Facebook in 2015. Since 2019, we've been working on Nabla (https://nabla.com), an intelligent assistant for health practitioners.
When GPT-3 was released in 2020, we investigated its use in a medical context[0], with mixed results.
Since then we’ve kept exploring opportunities at the intersection of healthcare and AI, and noticed that doctors spend an awful lot of time on medical documentation (writing clinical notes, updating their EHR, etc.).
Today, we're releasing Nabla Copilot, a Chrome extension generating clinical notes from video consultations, to address this problem.
You can try it out, without installation nor sign up, on our demo page: https://nabla.com/copilot-demo/
Here’s how it works under the hood:
- When a doctor starts a video consultation, our Chrome extension auto-starts itself and listens to the active tab as well as the doctor’s microphone.
- We then transcribe the consultation using a fine-tuned version of Whisper. We've trained Whisper on tens of thousands of hours of recordings of medical consultations and medical terminology, and we have now reached an error rate 3× lower than Google's Speech-to-Text.
- Once we have the transcript, we feed it to a heavily trained GPT-3, which generates a clinical note.
- We finally return the clinical note to the doctor through our Chrome extension; the doctor can then copy it into their EHR and send a version to the patient.
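To make the transcript-to-note step above concrete, here is a minimal sketch of how a transcript might be turned into a prompt for a note-generating LLM. The SOAP-style section headings and the prompt wording are illustrative assumptions on our part, not Nabla's actual prompt or API.

```python
# Illustrative prompt assembly for the transcript -> clinical note step.
# Section names and wording are assumptions, not Nabla's real prompt.
SECTIONS = ["Chief complaint", "History", "Assessment", "Plan"]

def build_note_prompt(transcript_turns):
    """transcript_turns: list of (speaker, utterance) tuples from the diarized transcript."""
    dialogue = "\n".join(f"{speaker}: {text}" for speaker, text in transcript_turns)
    headings = "\n".join(f"- {s}" for s in SECTIONS)
    return (
        "You are a medical scribe. Summarize the consultation below "
        "into a clinical note with these sections:\n"
        f"{headings}\n\n"
        f"Transcript:\n{dialogue}\n\n"
        "Clinical note:"
    )

turns = [
    ("Doctor", "What brings you in today?"),
    ("Patient", "My right knee has been hurting for two weeks."),
]
prompt = build_note_prompt(turns)
```

The resulting string would then be sent to the fine-tuned model; the real system presumably adds diarization metadata and few-shot examples on top of something like this.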
This lets doctors stay fully focused on their consultation, and saves them a lot of time.
Next, we want to make this work for in-person consultations.
We also want to extract structured data (in the FHIR standard) from the clinical note, and feed it to the doctor’s EHR so that it is automatically added to the patient's record.
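The structured-data step could look something like the sketch below: once a diagnosis has been extracted from the note (by whatever model does that), it gets shaped into a FHIR R4 `Condition` resource for the EHR. The helper name and the specific SNOMED code are illustrative, not part of Nabla's product.

```python
# Minimal sketch of emitting FHIR R4 structured data from an extracted
# diagnosis. The note -> (code, display) extraction is assumed to happen
# elsewhere; this only shapes the result as a FHIR Condition resource.
def make_condition(patient_id, snomed_code, display):
    return {
        "resourceType": "Condition",
        "subject": {"reference": f"Patient/{patient_id}"},
        "code": {
            "coding": [{
                "system": "http://snomed.info/sct",
                "code": snomed_code,
                "display": display,
            }],
            "text": display,
        },
        "clinicalStatus": {
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/condition-clinical",
                "code": "active",
            }]
        },
    }

# Example with a SNOMED CT code (illustrative choice of diagnosis).
condition = make_condition("123", "44054006", "Diabetes mellitus type 2")
```

A JSON document in this shape is what an EHR's FHIR endpoint would accept when appending to the patient's record.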
Happy to further discuss technical details in comments!
---
This is a task that perhaps can be supported with AI some day, but there are fields that deserve the application of a mature technology, not the gold-rush game of integrating today's hottest thing.
The summarization part, though, is dangerous. It's a very quick path to us losing all faith in our own medical records. Any way you slice it, no matter how much you train it, it's still going to be vulnerable to hallucination errors that slip by the reviewing doctor and become part of the patient's medical history.
Maybe you should re-evaluate whether that's the right choice based on your own comment, especially when ethics and safety experts are being fired by big tech co.
My doctor could vet it for accuracy, I suppose, but why? He's already putting his notes in my records anyway.
This is different, in that this service requires that a third party pay attention to the content of your discussion in order to work. That's significantly more intrusive.
Maybe it's just the doc offices I've been to, but I think this would actually just increase patient churn in a hospital. There's no way a doc is going to increase their patient time by the 40% saved if the hospital can toss them in front of another patient.
In any case HIPAA will protect you. Your provider will have a business associate contract with them, so they're subject to huge fines if they violate that.
This is a demo. In real life this'll be wrapped with a whole lot of legal contracts.
> in any case, HIPAA will protect you
It protects you insofar as it disincentivizes honest actors from doing sketchy things with your data. It's punishment for orgs, not protection for patients, much as laws against murder punish the murderer rather than protect the victim.
The best thing a patient can do to protect their privacy is to actively avoid medical practitioners that, for example, use tools like this, which send your private medical consultation transcripts to God-knows-where.
Then you simply cannot use modern healthcare. Unless your doctor is cash-pay, off the cuff, they're _legally_ using third parties to perform their services.
----
If your doctor/healthcare provider is a covered entity, they're bound by HIPAA. That means they're free to establish relationships with Business Associates that are part of delivering healthcare. Those Business Associates must also be HIPAA compliant.
Obviously the business model is harder with on-prem, but cloud-first for doctor notes is in the long run much harder.
We plan to offer an on-prem option eventually.
In the meantime:
- we offer a GDPR-compliant, EU-based hosting option
- we don't store anything (no audio, no transcript, no note): it's stateless and everything is erased at the end of each consultation
- data is pseudonymized as it flows through our systems
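As a rough illustration of what pseudonymization before data leaves the trusted boundary can mean: direct identifiers in the transcript are replaced with stable placeholders so downstream model servers never see real names. Real systems would use an NER model to find identifiers; the hard-coded mapping below is purely a toy assumption.

```python
# Toy sketch of transcript pseudonymization. A production system would
# detect identifiers with an NER model; here the mapping is hand-supplied.
import re

def pseudonymize(text, identifiers):
    """identifiers: dict mapping real identifier -> placeholder token."""
    for real, placeholder in identifiers.items():
        text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
    return text

mapping = {"John Smith": "[PATIENT]", "Dr. Dupont": "[DOCTOR]"}
out = pseudonymize("John Smith saw Dr. Dupont about knee pain.", mapping)
# out == "[PATIENT] saw [DOCTOR] about knee pain."
```

The mapping would live only inside the trusted boundary, so the pseudonymized text can be re-identified locally when the note comes back.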
Our first customers (large healthcare orgs) have been OK with this so far!
First question - you say that you don't store anything (no audio, no transcript, no note), but your legal agreement says that in order to use this service, the doctor asserts that they have obtained the patient's consent for you to reuse all data processed through the service for research and development, and to improve the performance, models, and algorithms of this or any other solution you come up with in the future. Why the difference, and how do you square A with B?
"Due to the substantial financial, material and human investments made by NABLA within the framework of the Contract for the development and updating of the Solution, NABLA wish to be allowed to reuse the data processed within the framework of the Contract.
The CLIENT, when applicable in the name and on behalf of the DATA CONTROLLER, warrants that the Data Subjects have been informed of their rights and have given their consent for the use of their data within the framework of the Contract when required by applicable laws or the Regulation and authorizes the DATA PROCESSOR to reuse the Data processed within the framework of the Contract, as long as the latter undertakes to comply with the Regulation for all of this Data, for the uses listed below:
- research and development of the Solution,
- improving the performance, models and algorithms developed and trained by NABLA in the context of the Solution or any other solution published by NABLA,"
Second question - you say that you don't store anything (no audio, no transcript, no note) and that it is all erased at the end of each consultation. But do you store any artifacts derived from the audio, transcript, or notes of a consultation - i.e., data processed directly or indirectly from them that is fed into an AI model, ML model, or other dataset that you persist after the consultation?
I fear that LLMs will kick this to a whole new level :(
How are 2 party consent states handled?
Is this HIPAA compliant?
Secure and HIPAA-eligible
Digging deeper (https://www.nabla.com/blog/privacy-security/):
> This data processing is done on Nabla's servers, which are powered by the HIPAA and GDPR compliant Google Cloud Platform (GCP), and on HIPAA-eligible LLM servers.
The blog posts also mention French trained ML.
> Cedille is a new open source French language model created by Coteries. It is trained to understand and write French and is also the largest model of its kind for French. Cedille is trained using large databases of publicly available content on the internet filtered for toxic content.
Expanding into the US, yes - they would need to deal with HIPAA, but until they do they likely don't need to.
https://www.scribeamerica.com/what-is-a-medical-scribe/
> A Medical Scribe is a revolutionary concept in modern medicine. Traditionally, a physician's job has been focusing solely on direct patient contact and care. However, the advent of the Electronic Health Record (EHR) created an overload of documentation and clerical responsibilities that slows physicians down and pulls them away from actual patient care. To relieve the documentation overload, physicians across the country are turning to Medical Scribe services.
> A Medical Scribe is essentially a personal assistant to the physician; performing documentation in the EHR, gathering information for the patient's visit, and partnering with the physician to deliver the pinnacle of efficient patient care.
https://www.hhs.gov/hipaa/for-professionals/covered-entities...
> If a covered entity engages a business associate to help it carry out its health care activities and functions, the covered entity must have a written business associate contract or other arrangement with the business associate that establishes specifically what the business associate has been engaged to do and requires the business associate to comply with the Rules’ requirements to protect the privacy and security of protected health information.
"If you handle, store or transmit protected health information (PHI) to or from a covered entity then you need to be HIPAA compliant."
Source: https://github.com/truevault/hipaa-compliance-developers-gui...
When a covered entity (a HIPAA-required provider) does business with a private non-covered entity _and_ that transaction involves HIPAA controlled information, they must enter into a Business Associate Agreement (BAA). This effectively forces the private entity to maintain the same HIPAA standard as the provider.
A private company is absolutely free to build non-HIPAA-compliant software, but they're completely unlikely to get any healthcare providers to actually use it.
Nothing like “subtly wrong” clinical notes to affect positive medical outcomes. And time pressed medical professionals are certainly well suited to vigilance tasks like correcting machine generated errors. /s.
I sincerely hope this never sees the light of day.
Doctors' handwriting is notoriously bad. Now try putting that into an EMR. Good luck with that! You think you're going to get someone who gets paid $2000/hr to type shit up? Yeah, guess again.
I used to type, now I dictate.
Note quality is provider-dependent, though. If an LLM can improve the quality of notes across the board through simple summarization, that's interesting. But the input matters - is the provider actually asking the right questions? Did they look at past relevant history and other notes and put it in their current note?
The LLM should be in a HIPAA compliant environment and have access to the EMR. Then it provides a tidy summary for that. Second it should have relevant questions for the provider to ask during the interview but also a real time/dynamic component which generates relevant follow up questions dependent on patient responses.
Lastly it should put together the old and the new and generate a new note which can then be edited by the provider. Editing notes is much easier than generation of a new high quality note.
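The flow proposed above (prior notes summarized, follow-up questions suggested, old and new merged into an editable draft) could be skeletoned like this. Every function body here is a stand-in for an LLM call; the names and data shapes are invented for illustration, not any real product API.

```python
# Hedged skeleton of the commenter's proposed EMR-aware note flow.
# Each step below would be an LLM call in practice; these bodies are stubs.
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    history_summary: str                 # tidy summary of prior EMR notes
    new_findings: str                    # content from today's encounter
    suggested_questions: list = field(default_factory=list)

def draft_from_visit(prior_notes, transcript):
    history = " / ".join(prior_notes[-2:])            # stub for "summarize past notes"
    questions = ["Any change since the last visit?"]  # stub for generated follow-ups
    return DraftNote(history_summary=history,
                     new_findings=transcript,
                     suggested_questions=questions)

draft = draft_from_visit(
    prior_notes=["2022: knee pain, PT advised"],
    transcript="Pain persists after PT.",
)
```

The provider would then edit `draft` rather than write a note from scratch, which is the "editing is easier than generation" point above.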
Doctors today already spend too little time with their patients to understand diseases holistically.
Adding technology between the two will, in most cases, make treatments worse, not better.
Going from a recorded transcript to a summary note, extracted structured data, and diagnostic codes is a big time saver.
You seem quite confident about this, but based on the doctors I know writing notes is a real pain and mostly seen as (important) scutwork. They're only given a set amount of time to see a patient (including notes), and if you can reduce notes they'd actually get more time talking and seeing the patient.
You're not thinking like a hospital administrator. Sounds like these doctors can handle 40% more patients to me!
Everyone hates notes. However, there are already a bunch of dictation services that help write notes more efficiently. They absolutely work well enough.
Adoption and funding are the primary blockers. When my wife works at facilities that have M-Modal, she's 80%+ faster. However, she works at many that, despite knowing the benefits, simply don't care to spend the time, money, or resources on implementing scribe/dictation services.
But this isn't even about the doctor asking GPT, this is transcribing the video call so the journaling becomes easier and the doctor can spend more time with their patients. A good thing, no?
Pharmacists in other countries do a lot of the basic shit doctors in the USA do, with less overhead, for cheaper and with a much faster turnaround. Antibiotics, blood pressure/cholesterol meds, antidepressants and other rubber stamp drugs absolutely do not need to be strongly controlled.
He has medical knowledge which allows him to sort through information.
Any developer who doesn't Google today is hindering their ability to develop and slowing the development process. Just as a dev cannot hold all the algorithms and language caveats/syntax in their head, a physician cannot hold all the information in their brain. They may not be familiar with certain symptoms, or might be Googling to see if there is anything "new" pertaining to a patient's specific issue.
There was a time when steroid injections were the go-to for tendon injuries; recently, research showed that they result in worse long-term outcomes and that PT should be the first-line treatment.
Would you be upset if someone were to ask you to help with their code and you needed to Google for documentation?
It's not exactly the same thing, but it's a similar situation.
I do tend to agree with you on the rest of your concerns, especially info being used that the patient has no way to verify - but that seems tangential at worst to the product in its current state.
It's pretty close to the same thing. Programmers are paid for knowing how to program, not for memorizing APIs, system calls, etc. Being able to look up details is essential because it lightens the cognitive load, allowing deeper thought to be put into the real meat of the problem. No programmer can realistically memorize every technical detail they'll need to know.
It's the same with doctors. They're paid because they know the process of diagnosing illnesses, not for memorizing Gray's Anatomy and every pharmacopoeia. Being able to look stuff up is essential for the same reason it is for programmers.
Not to mention that medicine changes, and you want your doctor to have the latest information about whatever it is that ails you.
You clearly have a personal bad experience with your doctor - but that doesn’t mean that this potential product is a terrible idea.
I shouldn't have to pay for your time so you can educate yourself on the thing you purported to know before I hired you. Especially when the hourly rates are in the hundreds of dollars.
So there is some balance there. And doctors are using software that spits them out shit in real-time and then they send you on your way with some printed info you can readily find on the web.
I just had a knee issue and the doctor thought it might be a torn meniscus. Didn't have me do any range-of-motion movements. We just chatted and he said, "I think it is this. Take an X-ray. Bye."
I go get an X-ray and the results come back the next day over the app. All he says in the app: "X-ray is normal."
So I'm thinking what the hell. That's where you leave it? So what. Should we do an MRI? What's next. You gave me an anti-inflammatory and your guess was apparently wrong. What a shitty engineer this person would be.
Tech can help in some cases. I like getting lab results back in the app. I like that I can follow up with chat later.
But I fear it's making doctors less effective. And the best ones will be the ones that maintain their traditional craft and nuance even with all the fancy new tools and tech that save them time.
Just like everyone is a React developer these days. Tech and advances can make many people sloppy. And dumber. It can push away great talent due to mandates of the tech or process. And attract new talent that is worse.
No one is obligated to do free Googling.
Or maybe you didn’t go the for the Googling?
I just like to point out the small advantages of living in Europe now and again