Readit News
rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
_DeadFred_ · 3 months ago
You sound incredibly short-sighted. Yeah, slack, and making sure people don't just get unwinnable tickets all day, is important for retention. And if your company needs more than warm bodies reading a script, yeah, you account for it.

Most machinery you can't run at 100% capacity. Most machinery you can't run 24/7. You schedule load. You schedule downtime. And the higher the capacity, the more the machine costs. If you aren't aware of this for your people, you are failing at your job.

rajvarkala · 3 months ago
Not sure I follow. But the first paragraph is interesting.

You are saying employees stick around if they are given easy tickets, and that companies care about passing along easy tickets so warm bodies do not churn.

That would be a big claim.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
lbreakjai · 3 months ago
Why would the AI care? The agent builder is still asking a non-deterministic black box with no skin in the game to behave a certain way; they have no guarantees.
rajvarkala · 3 months ago
If the AI is never going to be manageable, never trustworthy, then the whole idea of agentic systems is dumb.

What is the point of running an agent if you don't trust it?

That would be equivalent to calling this whole AI wave useless. Maybe it is, maybe it is not.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
htrp · 3 months ago
> But in some cases, for instance, my accounting agent would only get paid if he successfully uploads my tax returns.

I think you'd want it to correctly compute your taxes. Especially if you get a letter a year or two after the fact saying you owe the government money.

rajvarkala · 3 months ago
Indeed. The whole AI game is predicated on the premise that AI can deliver work equivalent to humans in some cases. If that is never going to be the case, then this whole agentic stuff goes belly-up.

The alternative scenario is they get better and do some work really well. That is an interesting territory to focus on.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
HelloMcFly · 3 months ago
The comparison doesn't quite hold because AWS is a utility; they aren't an arbiter of quality. Amazon charges for a serverless call regardless of whether your code worked or crashed. You pay for the effort (compute), which is verifiable and binary.

Once you shift to billing for outcomes like "resolutions," the vendor switches from a utility provider to the judge and jury of their own performance. At scale, that creates a "fox guarding the henhouse" dynamic. The friction of auditing those outcomes to ensure they aren't just Goodharted metrics eventually offsets the simplicity the model promises. Frankly, I just cannot and will not trust the judgment of tech companies who evangelize their own LLM outputs.

rajvarkala · 3 months ago
How do you verify AWS charges? By inspecting logs? There goes the arbiter.

I get the binary part. Is the biggest difference the subjective component of an outcome? But a tech provider, especially an agent provider, has to reduce that subjectivity to a quantitative metric when selling. If that cannot be done, I am not sure what we would be buying from agent builders/providers.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
jagged-chisel · 3 months ago
Outcome-billing makes absolute sense! In every case where I have used an LLM to work on a software project, I have been frustrated by the process and end up educating the thing myself. The outcome is that it has learned from me, so I need a place to send my consulting bill.
rajvarkala · 3 months ago
:)
rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
_DeadFred_ · 3 months ago
If the AI does all the easy tickets, there's no easing in new hires, so that process is going to be more expensive, and I'd better get a discount for that hit.

If there is zero slack, and only the hardest parts remain, this is no longer the job it was before. Salaries will have to go up, or retention will go down. On top of that, these jobs could already be awful when there was some slack; shifting all the slack tasks to AI is going to make them miserable, so the average customer interaction, once it reaches a human agent, is probably going to be worse and your customer satisfaction will take a hit. So I'd better get a discount for that reputational hit too.

It's like the idea of 'have AI pick the tomatoes it can, and the field worker the rest'. Picking the easy tomatoes is factored into the job. Having the AI pick the easy ones could break the whole model. Or, having zero slack for the workers could break them and result in no one showing up to jobs where AI has done the easy picking.

rajvarkala · 3 months ago
One reason slack exists is capacity and utilization: less slack means higher wait times at peak.
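The utilization point is standard queueing behavior, and a quick sketch (my illustration, not from the thread; the numbers are made up) shows how nonlinear it is. In an M/M/1 queue, expected wait grows as utilization approaches 100%:

```python
# Sketch of why "no slack" explodes wait times, using the classic
# M/M/1 queue result: expected queue wait W = rho / (mu - lam),
# where lam = arrival rate, mu = service rate, rho = lam / mu.

def mm1_wait(arrival_rate: float, service_rate: float) -> float:
    """Expected time a ticket waits in queue (M/M/1, in hours here)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: utilization >= 1")
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

service_rate = 10.0  # hypothetical: tickets an agent resolves per hour
for util in (0.5, 0.8, 0.95, 0.99):
    wait_min = mm1_wait(util * service_rate, service_rate) * 60
    print(f"utilization {util:.0%}: avg queue wait {wait_min:.1f} min")
```

Going from 50% to 99% utilization takes the average wait from minutes to hours; that is the price of removing slack.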

Is slack intended for employee welfare? Come on, we are talking corporate here.

Support services are already regimented - L1, L2, etc. I am not a fan of AI either, but it may be a new reality.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
deathanatos · 3 months ago
Already, today, human customer support agents' performance is measured in ticket resolution, and the Goodhart's Law consequences of that are trivially visible to anyone who's ever tried to get a ticket actually resolved, as opposed to simply marked "resolved" in a ticketing system somewhere…
rajvarkala · 3 months ago
We just give today's human performance metrics to AI agents.

AI agent developers internally have a metric they are targeting to improve. That itself runs into Goodhart's law.

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
HelloMcFly · 3 months ago
At scale? Programmatically? In a way that actually saves time and doesn't create billing conflict (that always happens to benefit the LLM vendor)?

No I do not.

rajvarkala · 3 months ago
Interesting. Let's take the case of infra spend on AWS. Amazon says you invoked serverless calls 100k times, and you are charged for it. How do you trust them?
rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
Neywiny · 3 months ago
I think it gets more nebulous. For example, does he only get paid if the tax returns are accepted by the government? If they aren't, he's still put in the work. This becomes an extremely slippery slope. A better example is probably retail. In the US at least, places like Walmart and Amazon allow returns, but they usually just throw the item out. That has to be built into the price. Meaning, the cheap no-returns-accepted online stores are cheaper because the cost to the purchaser isn't tied to satisfaction.

Your accountant has to build in a margin, which you pay for, to cover clients who stiff him on the bill or whom he has to take to court to argue he performed the service as described in the contract. If you didn't hold that threshold over his head, he would be able to charge less. Would he? Maybe not, I don't know the guy, but he could.

rajvarkala · 3 months ago
Understood. So a better way is to keep him on a retainer? Or let Amazon or the cheaper store use a cost-plus model?

I think that is the core of the argument: the risk-sharing between buyer and seller. If sold on outcomes, the seller carries all the risk. If sold on work put in, the buyer carries all the risk.

Add to that, in some scenarios, outcomes themselves are fuzzy.
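That risk split can be put in toy numbers (mine, not from the article): under effort billing the buyer pays for every attempt, success or not; under outcome billing the seller eats failed attempts and must price that risk into the success fee just to break even.

```python
# Toy pricing model for the effort-vs-outcome risk split.
# Assumption: attempts are independent, each costing the seller a
# fixed amount, with a fixed probability of success.

def breakeven_outcome_fee(cost_per_attempt: float, p_success: float,
                          margin: float = 0.2) -> float:
    """Fee per successful outcome covering expected failed attempts
    plus a profit margin."""
    # Expected attempts per success is 1 / p_success (geometric trials),
    # so the seller's expected cost per success scales with 1 / p.
    return cost_per_attempt / p_success * (1 + margin)

cost, p = 10.0, 0.8   # hypothetical: $10 per attempt, 80% success rate
fee = breakeven_outcome_fee(cost, p)
print("effort billing: buyer pays $10.00 per attempt, success or not")
print(f"outcome billing: seller must charge ${fee:.2f} per success")
```

The same expected money changes hands either way; what moves is who pays when an attempt fails, which is exactly why the fuzzier the outcome definition, the more that fee gets contested.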

rajvarkala commented on Why outcome-billing makes sense for AI Agents   valmi.io/blog/an-imperati... · Posted by u/rajvarkala
altcognito · 3 months ago
This is an article written by a company/LLM trying to justify huge increases to the pricing structure.

Oh! Y'know that thing we were charging you $200 a month for? We're going to start charging you for the value we provide, and it will now be $5,000 a month.

Meanwhile, the metrics for "value" are completely gamed.

rajvarkala · 3 months ago
The price will be what you are willing to pay. No justification required, except for fairness (info asymmetry, and what else?). It is written by me. Unfunded, bootstrapped; call it dire straits.

u/rajvarkala

Karma: 19 · Cake day: October 15, 2020
About
Building Valmi.

Sign up at valmi.io - Get your AI Agents paid.

Valmi Value is outcome-billing and payments infrastructure for AI agents. We handle metering, pricing, billing, and revenue tracking - so you can focus on building great AI products.

https://github.com/valmi-io/value
