asdajksah2123 · 2 years ago
I'm not sure I understand why AI is supposed to make anything better, at least under the regulatory regime we have right now.

1. AI doesn't do anything new. In the best case it's just human thinking done better, so the upside is that we may come up with ideas a human would have come up with 10-15 years later. But a lack of ideas is not the problem we face at all. We have solutions to nearly every problem we face, and enough wealth to implement nearly every solution. The real issue is that the wealth is concentrated in hands that don't want it to reach others' hands, and those capable of implementing the solutions we have prioritize maximizing the wealth they capture with them. AI, as long as it remains in private, wealth-focused hands, will only serve to turbocharge these issues.

2. Under the current regulatory regime, responsibility for what AI does wrong is not clear. Much like Uber and the other gig-economy companies leveraged the spread of the smartphone to become a "regulatory innovation" whose only real purpose was to dismantle regulations, impose costs on third parties, and capture the profits for themselves, AI is already being used as little more than a legal innovation: companies treat the gray area around legal responsibility as an advantage, doing things they would never have a human employee do, or using other people's works without paying for them, and without getting sued for the theft.

Expect AI to continue spreading into more legal gray areas as companies take advantage of the lack of clarity on legal responsibility to raid and pillage our economy and societies.

cmiles74 · 2 years ago
Right now, the only products I've seen leveraging LLMs are the tools from Nuance[1] that fill in physician notes. Arguably this is not an improvement; physician notes are already an incomprehensible garble of cut-and-paste content (often from other systems).[2] But at least it is unlikely to negatively impact patient care.

So far, AFAICT, the drive is to cut costs by saving the time the physician spends with the EHR. This has been a goal of EHR systems for as long as they've existed, since it saves money. There are a lot of press releases about products that do more, but it's not at all clear to me that these products are being widely used.

The UnitedHealthcare example in this article is interesting in that it was being used in production. OTOH, it sounds much more like something evaluating business rules or heuristics rather than an LLM or even an ML-based solution.

[1]: https://blogs.microsoft.com/blog/2023/08/22/microsoft-and-ep...

[2]: https://ehrintelligence.com/news/ehr-clinical-note-length-co...

scrubs · 2 years ago
"So far, AFAICT, the drive is to cut costs by saving the time the physician spends..."

In mental health, a lot of providers don't even file on the customer's insurance. The patient pays cash or charge and is stuck with filing the claims themselves. Providers have washed their hands of that dirty work.

During my last job (large firm, large metropolitan city), my employer subcontracted COVID testing, erecting a large tent to handle 8 COVID exams at once. Here's the procedure:

- you badge in; the subcontractor used our hw/network.

- you then give them your name and DOB, which the PC already told them

- they write your name on the specimen collection tube and send you to bay X. The nurse at bay X sees you coming

- at the bay you are asked for your name and DOB again. And you badge in again.

That level of duplication can only arise from billing my then-employer for extra non-productive labor, or as CYA protection against fraud claims. Either way it's damn annoying.

At my last doc appointment I spent the first 20 minutes answering THEIR BILLING questions: was my name XYZ? Did my insurance change? Etc. No services had been rendered yet.

And despite all the IT for billing, and the PC displays showing common-sense health tips, there was no display of how long you'd wait for the doc. It was a blind date: maybe they'd show; maybe you had better things to do.

In sum, whatever crappy IT health care has is there to help them, not customers. God damn: they made the paperwork hell they live in, and, as I say, mental health patients are stuck with the dirty work because providers can't stand it anymore.

My wife spends 2 hours a week yelling at Aetna. She's pretty sure they decline claims out of willful stupidity or nefarious greed. I would not be surprised to hear crappy AI is involved.

I absolutely hate that business.

I was solicited multiple times for interviews by health insurance corps. I told them NFW. Their pay sucks. Their concept of quality improvement sucks. They suck.

to11mtm · 2 years ago
> The UnitedHealthcare example in this article is interesting in that is was being used in production. OTOH, it sounds much more like something evaluating business rules or some heuristics rather than an LLM or even an ML based solution.

I'm fairly certain that at least one insurance company VP out there is using the LLM/ML hype to trumpet what is actually a plain statistical model. OTOH, if it provides cost savings, does the business truly care what it is? Especially if they can spin it as "the computer makes more informed decisions thanks to AI".

EdwardDiego · 2 years ago
My employer is exploring LLMs in our existing product for transcription purposes in a clinical setting, especially telehealth.

It's pretty damn impressive tbh.

bradknowles · 2 years ago
Can we be sure that those intermediated notes aren't just hallucinating what it thinks the doctor wants to write down?

How do you verify that what gets written down really is what the doctor wanted? How do you automate that to protect yourself against LLM hallucination?


userinanother · 2 years ago
“ Expect AI to continue spreading into more legal gray areas as companies take advantage of the lack of clarity on legal responsibility to raid and pillage our economy and societies.”

This is exactly what’s happening, it doesn’t even need ai just an algorithm that allows companies to have an excuse for denying care. If there is no human in the loop nobody is responsible. You can’t send an AI to jail so they can just abuse the system.

Judges need to force companies to stop using algorithms, and to approve all expenses, whenever an algorithm is found to be illegal or potentially illegal; that would end the practice quickly. The burden of proof should be on the algorithm's writer to show it's legal.

a_gnostic · 2 years ago
There are no solutions, only tradeoffs. Over-reliance on one-size-fits-all "solutions" is what got us into this mess, and AI will prove just another bandaid with its own problems until we realize this.
skohan · 2 years ago
Yes and no - sometimes you need tailored complex care from a professional. Sometimes you need routine treatment, but it's going to take 3 months to get an appointment with the given specialist, or maybe in the case of rural areas there isn't even one within 1000km of you.

A lot of good can be done by extending "one size fits all" solutions to simple things for which there's just no capacity.

codr7 · 2 years ago
2) is pretty much the definition of a modern startup if you ask me.

Precious few actually make a positive difference.

Obviously that's not the story they'll tell you, or even themselves, but once you start scratching the surface you'll find that they're all about squeezing profits and pushing costs just like everyone else.

sporkland · 2 years ago
> I'm not sure I understand why AI is supposed to make anything better

It's probably because you don't understand how the world works. Most things aren't revolutions; they just let us do good things more often and more cheaply, and so create widespread "wealth". In this case, imagine a doctor who isn't magically better than all doctors combined, but is instead as good as the best doctors and specialists, performs very consistently, and is available 24/7. It wouldn't revolutionize everything, but I imagine the collective impact on health and lifespans would be tremendous.

thesuitonym · 2 years ago
You don't understand because you think we have a society built around making life better for everyone. It's not. Our society currently is optimized for one thing: Making sure the rich stay rich, and the poor stay poor. In that respect, AI is amazing. AI requires a small upfront cost, then can do the job of thousands of people for the pay of less than one. It works 24 hours, and will never ask for more money. And it works exactly to the specification you set, and will never, ever do anything to help your customers.
asdajksah2123 · 2 years ago
I disagree with this. There's a lot of evidence that modern society has made things much better for everyone.

That doesn't mean that there aren't negative influences in modern society. The rich wanting to get richer is definitely one of the major ones that is particularly powerful right now.

But that's why society is a process. You gotta fight back and moderate the negative influences, and boost the more positive ones.

I think the question whether AI will fall on either the positive or negative side of the ledger is wrong in itself. AI will provide a lever to magnify whichever direction society and people were going in anyways independent of AI. It's a tool, and hopefully a really powerful tool, but it's not a God. It can be used equally for good and bad.

And if you believe, like I do (and by all appearances you do too), that our current society is balanced to benefit bad actors over good actors (where bad/good are being defined in terms of benefiting society overall), the magnifying effect of AI is likely to magnify the net negative.

The one advantage of something new like AI coming around is that, I believe, the primary reason open democratic societies allow themselves to become tilted in favor of bad actors is complacency... it's usually a slow shift rather than a single big change. The arrival of a major change like AI forces society and everyone in it to once again take stock of where society stands.

This gives us an opportunity to change the tilt of our society to promote good actors again, before AI truly starts having an impact, which gives us an opportunity to ensure AI magnifies a net benefit, rather than a net negative, for society.

And frankly I think we are closer to having that reckoning than we've ever been in my few decades on this planet, so overall I'm optimistic about AI. But again, whether we benefit or suffer with AI will be independent of AI itself.

infamouscow · 2 years ago
> Making sure the rich stay rich, and the poor stay poor.

It doesn't explain how Sears managed itself into bankruptcy, handing everything to the likes of Amazon despite having, comparatively, all the money in the world.

It also doesn't explain how EVs are taking off. I've seen multiple conspiratorial documentaries about why the oil companies will never allow EVs to ever take hold.

BobaFloutist · 2 years ago
>you think we have a society built around making life better for everyone.

You know, I don't think they do think that?

I think they think that's what society should be, though.

hackerlight · 2 years ago
> AI requires a small upfront cost, then can do the job of thousands of people

That's the same with every efficiency improvement. Efficiency improvements (like AI) aren't zero sum. When production costs go down, competition forces prices to also go down, which then leads to more wealth for everyone not just the capital owners (even if it disproportionately benefits capital owners).

People are made redundant and then find jobs in other parts of the economy where they're actually needed. This is a painful but necessary corrective mechanism for the economy to remain dynamic and productive and innovative.

That said, I still want a global wealth tax.

cscurmudgeon · 2 years ago
Hasn't the poverty rate consistently decreased?


idopmstuff · 2 years ago
1. AI is broadly accessible. While it definitely has its downsides, I think there's a valid argument to be made that access to AI medical advice is better than access to no medical advice.

2. AI can hold vastly more knowledge in its "head" than a doctor. There are many rare conditions that a doctor rarely sees. That doctor doesn't have enough time to spend researching these kinds of things for their patients, but an AI might be able to analyze their medical records/test results/etc. and come up with possible diagnoses that a regular primary care physician couldn't. If these diagnoses go to the PCP for evaluation (as opposed to going straight to the patient), what is the downside or harm?

malcolmgreaves · 2 years ago
> valid argument to be made that access to AI medical advice is better than access to no medical advice.

This is not true. And it's where I believe your whole response takes a wrong turn.

Why is it wrong? Because of risk of harm.

Giving out wrong information can easily kill people, or permanently injure them. There's a tremendous amount of harm that can occur if you do things wrong in medicine.

When you attach "an AI said X" to something, it adds the appearance of authority. This makes it incredibly dangerous. Folks that don't know enough will believe that the "advice" is true because it "comes from an AI" (and how could an AI be wrong? It's advanced! It's smart! It _can't be wrong_ because the company is using it instead of a real Doctor, and _they wouldn't want to harm me with something that can be wrong_...right?)

For example, let's take COVID. If you had GPT say "go take hydroxychloroquine for COVID", you'd end up with a lot of people getting harmed by (1) taking a drug that's not going to help them when they _think_ it would (so they're not going to do something else for treatment, because they incorrectly think they're already being treated) and (2) there's always side effects to taking _any drug_, and these side effects may actually cause _more harm_ than doing nothing.

rchaud · 2 years ago
Earlier discussion on this topic, with the original source link from ArsTechnica:

https://news.ycombinator.com/item?id=38299921 (Nov 17, 2023)

As I said on that thread: the dangers of AI will always come down to human decisions made on how it should be applied. These 'solutions' will come from the McKinseys and IBMs of the world to serve hospital CEO needs, not those of patients or care providers.

For those thinking that AI will reduce healthcare costs by improving efficiency, I point you to the 'electronic medical records' hype of 15 years ago. Are per capita healthcare costs any lower now?

snowpid · 2 years ago
" For those thinking that AI will reduce healthcare costs by improving efficiency, I point you to the 'electronic medical records' hype of 15 years ago. Are per capita healthcare costs any lower now? " Can you elaborate? In countries like Denmark or Estonia electronic medical records are well used among doctors and patients.
LegibleCrimson · 2 years ago
I think the point is that one of the big hype claims is that it will decrease the cost of healthcare and lower people's bills. But even if you lower the cost, there is no actual incentive to lower patient bills, so the extra value simply goes to owners and investors rather than the end consumer.
linuxftw · 2 years ago
The idea was that with all this healthcare data, patients would realize better outcomes. Like, your super-Doctor is going to realize that 6 months ago, you went to urgent care of XYZ condition, and that's going to shine some kind of light on your situation now, so he's going to be better prepared to treat you.

The reality is, the vast majority of that data is useless noise, nobody has the time to make any kind of analysis on it, and it's just another healthcare cost center providing zero value to the patient.

cmiles74 · 2 years ago
In the US, the cost of health care has not significantly decreased since the introduction of rules, regulations and government money pushing healthcare organizations to implement and use EHR systems.

I would say all of the intervention by the US federal government has made everything much worse. The EHR vendors (I'm looking at you, Epic) managed to consume the money thrown at the problem and yet have only moderately improved. In my opinion it also pushed smaller hospitals into expensive contracts with EHR vendors, eroding their already tenuous financial health and leading to even more consolidation.

hotnfresh · 2 years ago
Yep. The actual risk of “AI” in the medium term is that these things are gonna mostly be used for optimizing the shit out of extracting money from normal people (i.e. pushing down the standard of living) while all normal folks will get out of it is fancy autocomplete.

Great. Shifting the power and information imbalance farther toward large corporations is exactly what our flavor of capitalism needed. /s

(Mostly, in the legitimate, if you will, economy, that is—they’re also gonna be hugely useful to scammers and astroturfers)

geodel · 2 years ago
> The actual risk of “AI” in the medium term is that these things are gonna mostly be used for optimizing ...

I like this. I will further add: since in the long term we are all going to die, it will be fine for everyone :-)

rqtwteye · 2 years ago
" These 'solutions' will come from the McKinseys and IBMs of the world to serve hospital CEO needs, not those of patients or care providers."

That's the pattern for most internal enterprise software that's being purchased.

dinvlad · 2 years ago
It seems the Big Data hype never really died down - it just changed the name
tuatoru · 2 years ago
The Economist thinks things are better[1], in that costs are no longer rising faster than average inflation.

So not lower than before, but lower than was expected for 2023 a few years ago.

For those blocked by the paywall, the article lists causes for the slowing growth as: productivity improvements in paperwork, cheaper technology for dialysis, among other things, the Affordable Care Act, non-US countries insisting on generic drugs more often, and slower-than-inflation growth in median income (the rich hoovering up all the money).

[1]: https://www.economist.com/finance-and-economics/2023/10/26/h...

miah_ · 2 years ago
It's supercharging the worst in everything it's applied to. Try getting a job right now. All the automated screening systems will throw you out if you aren't 100% compatible with whatever they're looking for. Sure, it's cheaper than human screening, but now you've got a position unfilled for even longer, which is probably costing you in other ways.
apwell23 · 2 years ago
> screening will throw you out if you aren't 100% compatible with whatever they're looking for

how does one know if this is the reason they are being thrown out?

poidos · 2 years ago
You can never precisely be sure, but here are some of the behaviors I’ve been observing that give me the feeling I was auto-rejected:

- rejection coming in nearly instantly

- rejection coming at “odd” times: weekend evenings, or 3 AM in my time zone for a company in my time zone

The latter one doesn’t hold up too well for remote-first companies, but I would hope few tech firms have recruiting staff going through applications on weekends.
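Those timing signals could be folded into a simple heuristic. A minimal sketch, purely illustrative: the 15-minute threshold, the business-hours window, and the function name are all my own assumptions, not anything a real applicant-tracking system exposes:

```python
from datetime import datetime

def looks_auto_rejected(applied_at: datetime, rejected_at: datetime,
                        business_hours: tuple = (9, 18)) -> bool:
    """Heuristic only: flag rejections that arrive suspiciously fast,
    on a weekend, or outside normal business hours."""
    elapsed_s = (rejected_at - applied_at).total_seconds()
    instant = elapsed_s < 15 * 60                 # under ~15 minutes
    weekend = rejected_at.weekday() >= 5          # Saturday or Sunday
    off_hours = not (business_hours[0] <= rejected_at.hour < business_hours[1])
    return instant or weekend or off_hours
```

None of these signals prove anything on their own, which is the commenter's point: from the outside you can only guess.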

lordnacho · 2 years ago
You make up a CV that exactly matches the job advert and see whether there's a response
NovemberWhiskey · 2 years ago
As a hiring manager, based on the average candidate that gets forwarded to me after presumably several rounds of filtering (some mechanical, some human), the idea that there is some kind of smart matching going on is slightly ridiculous: they can't tell a security-focused software developer resume from an assurance-focused security specialist, for example.
siva7 · 2 years ago
I guess it hurts less if you can blame ai?
Eumenes · 2 years ago
AI isn't scanning job applications lol ... lazy recruiters can't review them fast enough.
zaptheimpaler · 2 years ago
Alternate headline: "Our Healthcare System Continues to Deny Claims, Now with AI". Makes it clearer who the culprit is... it's not AI. They have been denying claims for decades without any help from AI.
FireBeyond · 2 years ago
Most of these systems just present an RN with a claim and a list of reasons the system deemed it could be denied; the RN's job is ostensibly to see if any one of those bullet points needs to be vetoed. (I'm not sure why this hasn't been challenged as practicing medicine without a license, other than the payers' argument probably being "you can still get this intervention/treatment/drug, we're just not paying for it".)
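Mechanically, such a system can be nothing more than a rules engine that emits candidate denial reasons for the human reviewer. A minimal sketch: every claim field, code, and threshold here is made up for illustration and is not any payer's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    procedure_code: str
    diagnosis_code: str
    prior_auth: bool
    days_of_stay: int

# Made-up table of procedure/diagnosis pairs considered "indicated".
APPROVED_PAIRS = {("97110", "M54.5")}

# Each rule returns a denial reason string, or None if it doesn't apply.
RULES = [
    lambda c: None if c.prior_auth else "No prior authorization on file",
    lambda c: "Length of stay exceeds guideline" if c.days_of_stay > 3 else None,
    lambda c: None if (c.procedure_code, c.diagnosis_code) in APPROVED_PAIRS
              else "Procedure not indicated for diagnosis",
]

def candidate_denial_reasons(claim: Claim) -> list:
    """Collect every rule that fires; the RN reviewer vets the list."""
    return [r for rule in RULES if (r := rule(claim)) is not None]
```

Nothing in a system like this is "AI" in any meaningful sense, which is why commenters upthread suspect the label is mostly marketing.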
tempodox · 2 years ago
Yes, but if “AI” denies you, humans are suddenly no longer responsible.
zaptheimpaler · 2 years ago
The company using the tool is responsible regardless of how the decision is made, same as it is today, for whatever good that does us.
xrd · 2 years ago
Never in a million years could I have imagined there would be something worse than navigating the health care system. Hours of phone calls with deliberately confusing systems, impotent customer service representatives and no choices. It's like the movie Brazil already. Why wouldn't they add AI? Great idea.
friendly_deer · 2 years ago
Hah. There's literally an AI startup designed to automatically navigate confusing phone trees to get information from health systems:

https://outbound.ai

kyleee · 2 years ago
Truth is stranger than fiction
protoman3000 · 2 years ago
> Even when users successfully appealed these AI-generated determinations and win, they’re greeted with follow up AI-dictated rejections just days later, starting the process all over again.

Wait, what? Does that mean they can just take your money and reject your request to cover treatment until you die even if you had every right to do so?

TheCleric · 2 years ago
Unless you have expertise in fighting them, yes.

https://www.propublica.org/article/blue-cross-proton-therapy...

manicennui · 2 years ago
Yes, this is what Americans mean when they parrot bullshit about freedom: the freedom for corporations to do whatever they like in pursuit of profit.
cryptonector · 2 years ago
We don't have a free market in healthcare in the U.S. though. You can be for a free market and against the abomination we currently have. If we just went back to 80%/20%, high-premium-or-high-deductible proper insurance, no HMOs, no PPOs, no co-pays, none of that nonsense, with published pricing, then we'd have a free market.
mcosta · 2 years ago
Isn't the medical field one of the most regulated?
dsaavy · 2 years ago
Yep, and ever since a certain insurance provider has switched to an AI system, I've had nearly all my Type 1 Diabetic prescriptions rejected (that I've had no problem getting for over a decade).

Then it takes hours on the phone for every prescription to finally get one approved. Rinse and repeat every 3 months.

hotnfresh · 2 years ago
We’ve had a flex spending account (an HSA wasn’t an option) that we only put as much into as we were sure we’d spend, and it has been rejecting a very high rate of payments this year, even from doctors’ offices (WTF do you think it’s for?!), demanding documentation seemingly with no rhyme or reason. It seems like they’re actively trying to keep us above the roll-over limit so they can steal our money, which isn’t something we’ve experienced with health flex accounts in the past.

Wonder if we’re “beneficiaries” of the AI revolution. Or just an “if” statement triggering off a random number.

sofixa · 2 years ago
This is inhumane.
rqtwteye · 2 years ago
Technically no, but you need to be able to afford paying for lawyers to fight them or spend an enormous amount of time and energy.

For some reason, health insurers and hospitals can get away with practices no other business could. They can commit fraud and, when they get caught, all they have to do is say "oops" and fix the mistake. No other consequences. It's pretty crazy.

musha68k · 2 years ago
Also continuing the trend of unaccountability at least on a personal level.

https://en.m.wikipedia.org/wiki/Skin_in_the_game_(phrase)

manicennui · 2 years ago
It will almost certainly be used to make customer service worse as well. In "Ways of Being", James Bridle provides other examples of how corporations will use "AI" to exacerbate the problems they already cause in pursuit of profit.