throw10920 · 8 months ago
The article is interesting on the whole (I have no experience with "professional" work, and would welcome suggestions on how to become more familiar with it), but I latched onto this nugget:

> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.

Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:

that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.

That, to me, is deeply disturbing, and very very difficult to justify.

burningChrome · 8 months ago
>> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human.

Real world evidence supporting your argument:

United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:

The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.

They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.

https://www.healthcarefinancenews.com/news/class-action-laws...

thenewwazoo · 8 months ago
> 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.

This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials (the hidden variable) would be appealed at a higher rate, and therefore the true error rate across all denials is something less than 90%.

That's not to say UHG are without blame, I just thought this was really interesting.
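A toy model makes the selection effect concrete (every number below is invented for illustration; this is not UHG data): even a much lower true error rate looks like ~90% when measured only on appealed denials.

```python
# Toy model of selection bias in appeal statistics.
# All numbers are invented for illustration -- not UHG data.
denials = 100_000
wrong = 30_000                 # assume a 30% true error rate
right = denials - wrong

p_appeal_if_wrong = 0.50       # wrongful denials get appealed often
p_appeal_if_right = 0.02       # correct denials are rarely appealed

appealed_wrong = wrong * p_appeal_if_wrong    # 15,000 -- these get reversed
appealed_right = right * p_appeal_if_right    # 1,400 -- these get upheld

observed = appealed_wrong / (appealed_wrong + appealed_right)
print(f"true error rate: {wrong / denials:.0%}")       # 30%
print(f"reversal rate among appeals: {observed:.0%}")  # ~91%
```

Under these made-up assumptions, a 30% true error rate produces a ~91% reversal rate on appeal, which is why the appealed-denials figure alone can't pin down the real error rate.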

blitzar · 8 months ago
> has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.

feature, not bug

working as intended, closing ticket

sergius · 8 months ago
Not to mention the creation of a single point of failure for a critical service...
tbrownaw · 8 months ago
Seems to me that the use of AI is irrelevant[1], and the real problem is the absurd error rate.

[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".

siliconc0w · 8 months ago
AI adjudication of healthcare claims is fine, but there need to be extremely steep consequences for false negatives and a truly independent board of medical experts to appeal to. If a large panel agrees the denial was wrong, a penalty of 10-100x the cost of the procedure would be assessed, depending on the consequence of the denial.
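As a sketch, the proposed penalty scheme might look like the following (the linear mapping from a severity score onto the 10-100x range is my own assumption, not part of the proposal):

```python
# Sketch of the proposed penalty scheme: an independent panel reverses a
# denial, and the insurer pays a multiple of the procedure's cost.
# The severity scale and linear mapping are illustrative assumptions.
def denial_penalty(procedure_cost: float, severity: float) -> float:
    """severity in [0, 1]: 0 = minor consequence, 1 = catastrophic.
    Maps linearly onto the proposed 10x-100x penalty range."""
    multiplier = 10 + 90 * severity
    return multiplier * procedure_cost

print(denial_penalty(5_000, 0.0))   # 50000.0  (10x)
print(denial_penalty(5_000, 1.0))   # 500000.0 (100x)
```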
vkou · 8 months ago
Or, and here's the wild thing, put all these parasites, leeches, and other useless middle-men out of a job and just go single-payer.
throw10920 · 8 months ago
Yes, I agree. My point was contingent on the current state of affairs - until we can change that, AI remains a terrible idea.
reassess_blind · 8 months ago
No one is going to accept a claim rejection from AI. Everyone will want to dispute, which will have to go to a human to review. At the end of the day I don’t see how 100 people is realistic.
DiggyJohnson · 8 months ago
I don't think there's an ethical responsibility to worry about your competitor's labor. That would lead to stagnation and its own sort of ethical issues.
klank · 8 months ago
I don't think it's as easy as hand-waving it away as "your competitor's labor". Your competitor's labor is your community; it's people. I believe we all have an ethical responsibility to that.

For the points you brought up, why is stagnation for the purposes of upholding an ethical position a bad thing?

And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.

John23832 · 8 months ago
You misread. The poster is speaking about the ethical handling of customer service.
aprilthird2021 · 8 months ago
The whole ugly turn of AI hypemen claiming it's somehow morally okay for everyone to lose their jobs all at once makes me think the Luddites were right all along.
Y_Y · 8 months ago
Can we imagine a world where the claims are adjudicated by a disinterested party (as far as possible)? I don't want the insurance company to decide a contractual issue; that's ridiculous. At the moment they're kept honest by the law and by public opinion (which varies by country), but the principal-agent problem is too big to ignore.
sokoloff · 8 months ago
Life insurance claims seem fairly unambiguous to adjudicate.
lostlogin · 8 months ago
I agree. And then I recall my last few interactions with insurance companies.

Dealing with a machine is unlikely to be worse.

alabastervlog · 8 months ago
My knee-jerk reaction is to think that the prospect of an insurance company handing support over to machines is a terrible development.

But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.

So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.

throw10920 · 8 months ago
My last few interactions with an insurance company were moderately annoying but far from terrible - I would absolutely loathe having those replaced by a machine, given the terrible quality of every AI "assistant" I've ever used.
RankingMember · 8 months ago
While a human interaction can be awful, there's a special hellishness that is trying to negotiate with a robot to get something related to your healthcare taken care of.
RealityVoid · 8 months ago
It seems apparent to me that there needs to be some way to arbitrate claims outside the insurance company itself. I'm... not sure that there is. But if there were, and there existed some sort of sanction or incentive for the insurer to get it right the first time... I'm confident that AI insurance companies could streamline the process. But you need this incentive mechanism, else it's a recipe for dystopia. (A deeper thought is that you would shift a lot of work to the arbiter, but I won't touch that for now.)
DrScientist · 8 months ago
This comment gets at the heart of many of the challenges tech companies face - they can scale the serving of content, but struggle to scale content moderation and/or dispute resolution.

It's a common problem with automation - the focus is often on accelerating the 'happy' path, only to realise dealing with the exceptions is where the real challenges lie.

One tried and trusted way around that is to cherry-pick customers as part of your strategy. You sell insurance to people who will never claim (and hence never dispute), and shun those likely to.

However, such market segmentation results in no insurance for the people who need it, and the people who don't need it wondering why they are buying it - i.e. optimal efficiency for an insurance company is to simply offer no value at all.

I.e. you could argue the whole value proposition of an insurance company is to pool, not segment, risk, and critically to provide fair arbitration (protecting the majority of the pool from those who would commit insurance fraud, while still paying out).

Buying 'peace of mind' requires a belief in a fair-dealing insurer - that's the key scale challenge - not pricing or sales.

wnc3141 · 8 months ago
A few more things: 1) These digital-first (or AI-first) substitutes for traditional firms almost always rely on dark patterns to hit their metrics.

2) Having many firms serve a market is almost always better for consumers than a single firm (with a few notable exceptions).

3) In terms of large scale, it's impossible to scale efficiently across countries as you navigate new political and economic structures.

guywithahat · 8 months ago
I don't see it as inherently a problem; AI can (theoretically) be a lot more fair in dealing with claims, and responds a lot sooner.

That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.

alpha_squared · 8 months ago
> AI can (theoretically) be a lot more fair in dealing with claims

Respectfully, no it can't. From a Western perspective, specifically an American, average middle-class person's perspective, it only appears to be fair.

However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.

QuercusMax · 8 months ago
You're incredibly naive if you think AI will be used to pay out claims more fairly instead of being used as a deny-bot.
crazygringo · 8 months ago
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.

I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" people who couldn't even understand my issues, I'm not sure this is a bad thing.

In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.

And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.

ben_w · 8 months ago
While I get the vibes, and have had experience of human customer support being very weird on a few occasions, replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.

And right now, the LLMs aren't really that smart; they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. While this is better than every response coming from a randomly selected customer support agent (as I've experienced), agents who don't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, it's not great.

LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.

watwut · 8 months ago
I will say it even more plainly: the plan is to find clever ways to not pay insurance claims, in an automated fashion.

jajko · 8 months ago
Nothing new or revolutionary, just the usual race to the cost bottom, with a corresponding quality bottom.

The author ignores the fact that in any normal market there are variously priced insurance offerings, yet somehow not all people flock to the cheapest one; quite the contrary (at least where I live). Higher fees mean, e.g., a less stressful life when dealing with the insurer.

ttoinou · 8 months ago
Couldn't they have contractors doing that? Be it sub-insurance companies or freelancers.

> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job

Less work is... good? The ethics are positive here. More work, more pain.

throw10920 · 8 months ago
> Couldn't they have contractors doing that? Be it sub-insurance companies or freelancers.

That's a huge assumption that has no supporting evidence.

> Less work is... good? The ethics are positive here. More work, more pain.

No. Work allows people to earn money and survive. The ethics are not obviously positive; that's up for debate, but this is not the place.

jjk7 · 8 months ago
The AI: > if (claim) reject()
energy123 · 8 months ago
Ethical issues of putting people out of a job? Please. This mindset has to be called out because it directly causes suffering by creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork-barrel policy.

Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.

dehrmann · 8 months ago
My favorite example of complaints about "putting people out of jobs" is that it was the argument against allowing self-serve gas in New Jersey (and, until recently, Oregon).
nathan_compton · 8 months ago
I think the commenter was definitely somewhat glib in their statement, but I don't think the case is as clear cut as you think.

The way I've come to think of the current moment in history is that capitalism allocates resources via markets, and we use this system because in many situations it's highly efficient. But governments allocate resources democratically precisely because we do not always want to allocate resources efficiently with respect to making money.

Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.

ImPostingOnHN · 8 months ago
> it directly causes suffering via creating a societal permission structure for politicians to protect interest groups with protectionist trade policy and internal pork barreling policy

What part of that is suffering, if it enables 100k constituents to put food on the table?

xp84 · 8 months ago
> dispute resolution, customer service, etc

There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.

More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.

zvitiate · 8 months ago
There's a huge assumption in your comment -- that you know how insurance works. "Most" probably aren't working in sales and marketing; I'd heavily dispute anything above 50% and I feel like 33% might be pushing it? I don't want to get overconfident here, but this claim feels off-base.

Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc., which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.

E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:

a. incorrectly approves someone, then you need to kick them off the policy later?

b. incorrectly denies someone initial or continuing coverage?

Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.

And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.

throw10920 · 8 months ago
> There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.

That's not an assumption.

I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".

I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.

That's really all that you need in order to make the judgement that you're not going to get a human.

Side note: I did a quick search and found that Allstate has 23k reps that actually handle claims and 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, about 1 rep per 1,500 customers. That's much better odds than 1 for every 10 million.
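For what it's worth, the arithmetic behind that side note checks out (figures as cited in the comment, not independently verified here):

```python
# Back-of-the-envelope check of the claims-rep-to-customer ratio,
# using the figures cited above.
claims_reps = 23_000
us_population = 340_000_000
market_share = 0.10

customers = us_population * market_share       # ~34M policyholders
customers_per_rep = customers / claims_reps    # ~1,478, i.e. roughly 1 per 1,500

# Versus 100 employees serving 1B people:
meanwhile_customers_per_employee = 1_000_000_000 / 100   # 10 million

print(f"{customers_per_rep:,.0f} customers per claims rep")
print(f"{meanwhile_customers_per_employee:,.0f} customers per employee at the proposed scale")
```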

> A reasonably decent AI

And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.

bee_rider · 8 months ago
Apparently 172,824 people die per day,

https://worldpopulationreview.com/countries/deaths-per-day

So 100,000 employees actually puts it surprisingly close to just one case handled per employee per day.

Of course, a ton of people don’t have life insurance. And also, a lot of deaths are pretty straightforward.
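Spelling the arithmetic out (death figure as cited above; headcounts from the thread; this assumes, unrealistically, that every death becomes a claim):

```python
# Claims workload per employee per day, under the crude assumption
# that every death worldwide generates one claim.
deaths_per_day = 172_824   # figure cited from worldpopulationreview.com

cases_large_staff = deaths_per_day / 100_000   # Allianz-scale headcount
cases_small_staff = deaths_per_day / 100       # Meanwhile's stated target

print(f"{cases_large_staff:.1f} cases/day/employee with 100,000 staff")
print(f"{cases_small_staff:,.0f} cases/day/employee with 100 staff")
```

At 100,000 staff that's roughly 1.7 cases per employee per day; at 100 staff it balloons to roughly 1,700, which is the gap automation is being asked to cover.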

CGMthrowaway · 8 months ago
> the only way to provide dispute resolution and customer service to 1B people is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.

The Catholic church has 1B "customers" and seems to be doing ok with human-to-human interaction, without the need (or desire) for AI. They do so via ~500K priests and another 4M lay ministers.

nawgz · 8 months ago
This comment is confusing. The Catholic church does not have 100 employees.

johnobrien1010 · 8 months ago
Wanted to point to the startup the author seems to be running, which sells life insurance somehow tied to Bitcoin: https://meanwhile.bm/

For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.

Also, the fact that the disclosure seems to limit sales to Bermuda alone looks intentional. I suspect that this product would be highly illegal in most if not all US states, so they must offer it for sale only in Bermuda to avoid that issue.

verall · 8 months ago
I think it's actually tax avoidance disguised as life insurance:

> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
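Tracing the quote's logic with made-up prices (purely illustrative; it ignores loan interest, policy fees, and whether the tax treatment would actually hold up):

```python
# Illustrative numbers only -- this just traces the mechanism the
# quote describes, not actual tax advice or Meanwhile's real terms.
price_at_funding = 10_000   # BTC price when the policy is funded
price_at_loan = 100_000     # BTC price after a 10x move

# Selling BTC held since funding would realize a taxable gain:
gain_if_sold_directly = price_at_loan - price_at_funding        # 90,000

# Borrowing BTC from the policy instead: per the quote, the borrowed
# coin's cost basis is set at the time of the loan...
basis_of_borrowed_btc = price_at_loan
# ...so selling it immediately realizes no gain at all:
gain_if_borrowed_then_sold = price_at_loan - basis_of_borrowed_btc  # 0

print(gain_if_sold_directly, gain_if_borrowed_then_sold)
```

That zero-gain sale on 10x of appreciation is exactly the step the parent is calling tax avoidance.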

tecleandor · 8 months ago
Oooooouch, I missed that when checking their site, and that's extra shady. Especially when at the foot of the page you can see:

  Neither Meanwhile Insurance Bitcoin (Bermuda) Limited nor its affiliates Meanwhile Services (Bermuda) Limited and Meanwhile Incorporated, are lawyers or accountants. They do not provide legal or tax advice. You are wholly responsible for obtaining your own legal and tax advice.
And everything being incorporated in Bermuda, and regulated only by Bermuda law, makes it very impractical as insurance (go and try to claim whatever you want against them from your own country; I don't think it will be easy) and very obviously a tax-evasion thing.

n_ary · 8 months ago
Did I read that right? Sam Altman is funding this? If true, I am gaining some new perspective on him.
pinkmuffinere · 8 months ago
I assume from your comment that you haven’t heard of WorldCoin, also funded by Altman.

https://world.org/

Fomite · 8 months ago
"I wanted to derisk my resume by working somewhere with high signaling."

You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.

account-5 · 8 months ago
Without googling I have no clue what that sentence means: "I wanted to derisk my resume by working somewhere with high signaling."
IshKebab · 8 months ago
It means he wanted somewhere impressive on his resume so people trusted him more.
comrade1234 · 8 months ago
My wife made a McKinsey consultant cry… she hired McKinsey for some internal project. One person on the project was a recent Harvard grad. They were in a meeting going over the deliverables along with the McKinsey partner on the project and in the meeting my wife said something to the effect that their work wasn’t up to McKinsey standards.

The junior guy started crying in the meeting. Like just blubbering. My wife still feels bad for it but still…

Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.

SJC_Hacker · 8 months ago
I don't care if you went to an Ivy League school and graduated at the top of your class; I really don't get WTF someone whose life experience has been almost exclusively in school really knows about running a business.

Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.

guywithahat · 8 months ago
The whole business is nonsensical. The point of a consultant is they have a lot of experience in a specific domain, a recent Harvard grad is useless. From what I've heard, tons of their consultants are young people with minimal real industry experience
crazygringo · 8 months ago
You don't seem to understand how consulting works.

The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.

The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.

frankfrank13 · 8 months ago
Having worked at Mck, what I could very well imagine happened behind the scenes here was

1. This BA/Asc was on <4 hours of sleep, maybe many days in a row

2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted

And after the meeting (this I feel more confident about, as it happens a lot)

1. A conversation happened to see if the BA/Asc wanted to stay on the project

2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)

Isn't that... good? What else would you expect

biker142541 · 8 months ago
Genuinely funny. Had to once interface with a small team from Deloitte on a project, and pushed hard during an early meeting for them to outline the problems and scope. Just complete incompetence... I didn't make anyone cry, but definitely squirm a lot. Just asking questions about their understanding, process to close gap of understanding, and project management plans were enough to make clear to the main executive stakeholder on our end that this was going to be a trainwreck. They were fired shortly after.
chollida1 · 8 months ago
> Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.

Why would they fire him after a single incident?

Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)

yodsanklai · 8 months ago
That's what I thought. Having the partner present seems to be the right way of handling this. The company is responsible for its employees' well-being and shouldn't let a client bully them.
yodsanklai · 8 months ago
McKinsey employees are people too
watwut · 8 months ago
Are you genuinely surprised they did not fire him over a single incident?
tecleandor · 8 months ago
McKinsey (or Boston, or any of their peers...) happily recommends firing hundreds or thousands of people for any vague reason each time they are brought in for an "optimization process".

JohnMakin · 8 months ago
This reads like a LinkedIn post, and I'm only commenting because I'd like to hear more about the 2nd type of big-org problem he faced, the kind he felt wasn't fixable, and why - but instead I got a pitch for his new startup, which I guess should've been expected from the title. Just hoped for more substance.
nforgerit · 8 months ago
Oh McKinsey had a name for that program ("Leap"). I once worked at a "Telco Enterprise Startup" in Berlin founded by them.

They essentially lied about all the anticipated KPI potential and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS, which was such a mess that it made the second year's CTO start from scratch. After some heavy arguments over their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "Engineers" felt like talking to AWS Sales; they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.

whistle650 · 8 months ago
Looking at the home page of Meanwhile only made me think of how life insurance is such a different thing than, say, a mortgage. With life insurance, counterparty risk matters. You don't care about your mortgage counterparty. I'm not going to buy life insurance from an insurer with Youtube videos of Anthony Pompliano on their home page. Know your enemy.
dzink · 8 months ago
The engineer in me immediately looks for ways to map out how tax avoidance via crypto trading on life insurance funds, through a Bermuda company, can possibly go wrong. Insurance has a nice long-term cash flow that has proved very sweet for Berkshire Hathaway, and investment on top of that gets perks for the insurer. However crypto, which has liquidity issues and is heavily scammed and stolen, would benefit far more than the users of the business. The holdings would stay for decades, allowing arbitrage of the main company with user investments. If there is a leak or a collapse of the crypto, the customers won't know it until they can't get their funds back, but since AI is handling the claims, they may never even find out the real reason they can't get their money back. And since it's life insurance, the buyer might never find out, while their descendants or loved ones may not know how to deal with it or be plenty confused by the lack of customer service. Very novel scheme.