jcalx · 5 months ago
Reminds me of this article from two years ago [0] and my HN comment on it. Yet another AI startup on the general trajectory of:

1) Someone runs into an interesting problem that can potentially be solved with ML/AI. They try to solve it for themselves.

2) "Hey! The model is kind of working. It's useful enough that I bet other people would pay for it."

3) They launch a paid API, SaaS startup, etc. and get a few paying customers.

4) Turns out their ML/AI method doesn't generalize so well. Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that go badly. They tell themselves that the human fixes can also be used to train and improve the model.

5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

6) Then someone writes an article about them using cheap human labor.

[0] https://news.ycombinator.com/item?id=37405450

palmotea · 5 months ago
> 5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

AI stands for "Actually, Indians."

shantara · 5 months ago
I’ve been reading the article about the failure of the new Siri, and this quote stuck with me:

> Apple's AI/ML group has been dubbed "AIMLess" internally

The article: https://www.macrumors.com/2025/04/10/chaos-behind-siri-revea...

K0balt · 5 months ago
This has been a running joke in several projects I have been involved in, each time apparently independently evolved. I never bring it up, but I am amused each time it appears out of the zeitgeist. It’s actually the best kind of ironic humor, the kind that exposes a truth and a lie at the same time, with just enough political incorrectness to get traction.

I can’t even count the number of times I have shut down “AI” projects where the actual plan was to use a labor pool to simulate AI, in order to create the training data to replace the humans with AI. Don’t get me wrong, it’s not a terrible idea for some cases, but you can’t just come straight out of the gate with fraud. Well, I mean, you could. But. Maybe you shouldn’t.

morksinaanab · 5 months ago
I always thought it stood for Almost Implemented
jansan · 5 months ago
Or it should be changed to MT -> Mechanical Turk

"Our bleeding edge AI/MT app..." does not sound bad at all.

amy214 · 5 months ago
Worth mentioning Amazon's amazing high-tech "put everything in your cart and just walk out" (https://www.businessinsider.com/amazons-just-walk-out-actual...)

They 100% use this AI "Actually Indians" technology.

coupdejarnac · 5 months ago
Destiny fan?
genewitch · 5 months ago
Anonymous

midnightblue · 5 months ago
You win the internets, sir.
b3lvedere · 5 months ago
-- apologies --
jjmarr · 5 months ago
> Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly.

The most important part of your reputation is admitting fault. Sometimes your product isn't perfect. Lying to your investors about automation rates is far worse for your reputation than just taking the L.

siva7 · 5 months ago
Literally every founder story disproves your theory
chii · 5 months ago
The expectation is that the startup lies until they make it. It isn't too dissimilar to Theranos.
Digory · 5 months ago
I'd think ambiguous statements about the scope of your AI would make it hard to prove fraud, if you were being careful at all. "Involving AI" could mean 1% AI.

So it's doubly surprising to me the government chose (criminal) wire fraud, not (civil) securities fraud, which would have a lower burden of proof.

Government lawyers almost never try to make their job harder than it has to be.

tbrownaw · 5 months ago
If you click through to the doj press release, they're saying the statements were pretty explicit.
A4ET8a8uTh0_v2 · 5 months ago
To be perfectly honest, I am more amazed that this was seen as a viable business model, and that people were willing not just to invest in it but to offer their rather personal information to an unaffiliated third party.
claiir · 5 months ago
In this case it's a little bit worse; the "nate" app had an automation rate of literally "0%," despite representations to investors of an "AI" automation rate of "93-97%" powered by "LSTMs, NLP, and RL." No ML model ever existed! [1]

See:

> As SANIGER knew, at the time nate was claiming to use AI to automate online purchases, the app’s actual automation rate was effectively 0%. SANIGER concealed that reality from investors and most nate employees: he told employees to keep nate’s automation rate secret; he restricted access to nate’s “automation rate dashboard,” which displayed automation metrics; and he provided false explanations for his secrecy, such as the automation data was a “trade secret.”

> SANIGER claimed that nate's "deep learning models" were "custom built" and use a "mix of long short-term memory, natural language processing, and reinforcement learning."

> When, on the eve of making an investment, an employee of Investment Firm-1 asked SANIGER about nate's automation rate, that is, the percentage of transactions successfully completed with nate's AI technology, SANIGER claimed that internal testing showed that "success ranges from 93% to 97%."

(from [1])

[1]: https://www.justice.gov/usao-sdny/media/1396131/dl?inline

mvkel · 4 months ago
> Turns out their ML/AI method doesn't generalize so well.

I'd argue the opposite. AI typically generalizes very well. What it can't do well is specifics. It can't do the same thing over and over and follow every detail.

That's what's surprised me about so many of these startups. They're looking at it from the bottom up, something AI is uniquely bad at.

mvdtnz · 5 months ago
I think you're being excessively generous. According to the linked article,

> But despite Nate acquiring some AI technology and hiring data scientists, its app’s actual automation rate was effectively 0%, the DOJ claims.

Sometimes people are just dishonest. And when those people use their dishonesty to fleece real people, they belong in prison.

ohgr · 5 months ago
This is what we did internally. Someone said we could use LLMs to help engineering teams solve production issues. Turned out it was just a useless tar pit. The end game is that we outsourced it.

Neither of these solved the problem that our stack is a pile of cat shit and needs some maintenance from people who know what the hell they are doing. It’s not solving a problem. It’s adding another layer of cat shit.

Lerc · 5 months ago
Going back earlier, a similar thing was done in 2017.

https://thespinoff.co.nz/the-best-of/06-03-2018/the-mystery-...

Interestingly this was a task that could probably be done well enough by AI now.

Not that these guys knew how close to reality they turned out to be. I assume they just had no idea of the problem they were attempting, and assumed it was at the geotagging-a-photo end of the scale when it was at the 'is it a bird' end.

Maybe I'm being overly optimistic in assuming people who do this are honestly attempting to solve the problem and fudging it to buy time. In general they seem more deluded about their abilities than planning a con from start to finish.

aucisson_masque · 5 months ago
> its app’s actual automation rate was effectively 0%, the DOJ claims.

In that case, I believe it's a scam. 0% isn't some edge case.

baxtr · 5 months ago
tbh I don’t think anyone except investors cares how you deliver a service, as long as quality and price are right.
petesergeant · 5 months ago
Honestly, I think the only real problem here is if you then raise further money claiming you've solved the problem when you haven't, which is also where this particular startup came unstuck.
belter · 5 months ago
> Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.

Tesla robots and robotaxis enter the room...

dale_huevo · 5 months ago
I've been flagged as a potential shoplifter by the self-checkout at the grocery store based on some video analysis of CCTV footage of my hand motions. (It was wrong, of course.) After leaving the store I wondered if it really was software analysis or just some guy in India or the Philippines watching a live feed of me scanning bananas.
Joel_Mckay · 5 months ago
It is likely a real machine vision system if it was the same system our former company evaluated.

It worked by camera-tracking the shelves' contents and adjusting the inventory level for a specific customer's actions. Finally, it tracked the incremental mass change during the checkout process to cross-reference label-swap scams etc.

Thus, people get flagged if their appearance changes while in the store, if the mass of goods is inconsistent with the scanned labels, or if the cameras don't see the inventory re-stocked.
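
Roughly the shape of that last cross-check, as a minimal sketch (the function, catalog layout, and tolerance below are hypothetical illustrations, not the actual product code):

    # Hypothetical sketch of the label-swap cross-check described above.
    # `catalog` maps each scanned label/PLU to its expected unit mass in grams.
    def flag_label_swaps(scanned_labels, mass_deltas, catalog, tolerance=0.05):
        # Flag scans whose incremental scale reading disagrees with the
        # expected mass of the scanned label.
        flags = []
        for label, delta in zip(scanned_labels, mass_deltas):
            expected = catalog[label]["mass_g"]
            if abs(delta - expected) > expected * tolerance:
                flags.append((label, delta, expected))
        return flags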

You would be surprised how much irrational effort some board members put into the self-checkout systems. Personally, I found the whole project incredibly boring... so I found a more entertaining project elsewhere... =3

imroot · 5 months ago
Percepta was a company that was doing a lot of CV/ML in this space, looking for shoplifting traits. They had a few paying customers before they were acquired outright by ADT Business. A lot of shoplifters enter the PLU (price look-up) code for bananas when tag-swapping higher-ticket items at the self-checkout, so, more than likely, they wanted to check that you were actually purchasing bananas.
chneu · 5 months ago
For a while a lot of grocery stores were randomly auditing self checkout. I haven't had it happen to me in a couple years though.

It always seemed to be random and coincided with Kroger doing the "scan as you shop" trial thing.

shmel · 5 months ago
What is PLU?
bitwize · 5 months ago
At the Circle K they have the option of doing self checkout by putting all your items under a camera and the register will automagically count 'em up and assess your total. I keep wondering if it's done by AI -- All Indians. Same with the OCR ATMs do on cheques.
tczMUFlmoNk · 5 months ago
Relevant: Uniqlo's self checkout, based on RFID tags with a great user experience:

- https://news.ycombinator.com/item?id=38715111

- https://www.wsj.com/business/retail/uniqlo-self-checkout-rfi...

- https://archive.is/ms1ke

evbogue · 5 months ago
This vibes with my multiyear theory that Tesla self-driving is someone in China driving your car for you like a racing simulator. Perhaps the graphics are even gamified so the work stays mysterious.
buu700 · 5 months ago
> I've been flagged as a potential shoplifter by the self-checkout at the grocery store based on some video analysis of CCTV footage of my hand motions.

Shopping in 2025 must be a frustrating experience for magicians.

baxtr · 5 months ago
Sorry to hear.

Why would it matter to you if it’s a real human or AI? Wrong in any case.

k-i-r-t-h-i · 5 months ago
I was wondering why there wasn't a DOJ concern when Amazon Go did the same thing:

> Amazon Go: Early on, Amazon was clear that it was testing “Just Walk Out” tech — and it was known (at least in tech circles) that they had humans reviewing edge cases through video feeds. Some even joked about the “humans behind the AI.”

> Their core claim was that eventually the tech would get better, and the human backup was mostly for training data and quality assurance.

> They didn’t say, “this is 100% AI with zero human help right now.”

> Nate: Claimed it was already fully automated.

> Their CEO explicitly said the AI was doing all the work — “without human intervention” — and only used contractors for rare edge cases.

> According to the DOJ, the truth was: humans were doing everything, and AI was just a branding tool.

> Investors were told it was a software platform, when it was really a BPO in disguise.

hobobaggins · 5 months ago
Amazon didn't raise money from credulous investors. Alphabet's Waymo was also having humans take over for some of the driving as well.

And everyone knows that ChatGPT Pro is exclusively powered by capuchin monkeys.

AlotOfReading · 5 months ago
There are some pretty major differences between what Waymo does and a remote driving service (like the Vegas deployment by Vay mentioned upthread). Imagine that the car has a remote connection to a human while driving, and the human misses that another vehicle is about to T-bone the taxi. Whose responsibility is it to stop?

With Waymo vehicles, it's the car's responsibility to sense the issue and brake, so we say that the car is driving and the human is a "remote assistant". With Vay, it's the human's responsibility because they are the driver.

This ends up having a lot of meaningful distinctions across the stack, even if it seems like a superficial distinction at first.

bluesnews · 5 months ago
It is a public company, so someone could be investing on the basis of that technology.
smegma2 · 5 months ago
> Alphabet's Waymo was also having humans take over for some of the driving as well.

Not sure if this used to be the case, but today Waymos can’t be controlled remotely by humans, only ‘guided’: https://www.govtech.com/transportation/waymo-robotaxis-getti... (ctrl+f “cannot be controlled”)

konfusinomicon · 5 months ago
i asked for an optimized database schema several times and all i keep getting is these damn Shakespeare sonnets. starting to wonder if they are on to something...
kylecazar · 5 months ago
I had no idea. There was an Amazon Go right in my workplace in 2019 (Brookfield Place) and I got lunches there almost daily. I loved it -- felt like magic, and it was crazy fast. I guess it was just an illusion (as all magic is).
bombcar · 5 months ago
There was something similar run by a German university near the hotel I was staying at. As an American I had to use the cashier like normal, but they had signs about how the Amazon Go-like process the students were experimenting with would work, including pictures and descriptions of how to keep it from getting confused.
Dylan16807 · 5 months ago
> I was wondering why there wasn't a DOJ concern when Amazon Go did the same thing:

"Mostly AI, but they failed at getting close enough to 100%" and "effectively 0% AI" are not the same thing.

sschueller · 5 months ago
Elon has also made a lot of claims over the years. Where is FSD, or whatever they call it now? The whole solar roof tiles presentation was a lie at the time, P2P Starship travel is impossible but is being "sold" to the public as possible, and there are many other examples.
cratermoon · 5 months ago
Exactly. In this case it's pretty clear how Nate was defrauding investors with its claims. Amazon Go made fraudulent claims too, but it not only had the legal savvy to hedge those claims, it also didn't directly raise funds from investors based on them.

IANAL, of course.

gamblor956 · 5 months ago
AI stands for "actually Indians."

It's the same tech used at Intuit Dome for the food stalls.

nyarlathotep_ · 5 months ago
Yeah, how quickly we forget.

bashtoni · 5 months ago
Sadly, I think we all know the answer - because laws don't apply to large corporations or wealthy, powerful individuals in the same way they apply to the rest of us.
themanmaran · 5 months ago
I'm curious when it crossed the line into "fraud" here, since almost every "AI" application has tons of human fallback. Waymo has human drivers who can teleoperate the vehicle when it gets stuck. The Amazon Go stores were really powered by teams in India [0]. And companies have been pitching "powered by AI" for a decade.

Perhaps this came up because investors finally got a peek at the margins and saw there was a giant offshore line item. Otherwise it seems like an "automation rate" is a really ambiguous number for investors to track.

> This type of deception not only victimizes innocent investors

Also this was a funny line

[0] https://www.businessinsider.com/amazons-just-walk-out-actual...

phire · 5 months ago
It’s fraud when they lie to investors, or allow them to assume the wrong thing.

Doesn’t matter what consumers believe, it’s more or less legal to lie to consumers about how a product works, as long as investors know how the sausage is made. (Though, in reality it’s near impossible to lie to customers without also misleading investors, especially for publicly listed companies)

In this case, investors were under the impression that the AI worked, completing 99% of transactions without any human intervention. In reality, it was essentially 0%.

rtkwe · 5 months ago
When you claim "without human intervention... except for edge cases" and the truth is that it's all "edge cases", i.e. 0% AI.

> Saniger raised millions in venture funding by claiming that Nate was able to transact online “without human intervention,” except for edge cases where the AI failed to complete a transaction. But despite Nate acquiring some AI technology and hiring data scientists, its app’s actual automation rate was effectively 0%, the DOJ claims.

dspillett · 5 months ago
> I'm curious when it crossed the line into "fraud" here.

Fraud is often defined as gaining something (or depriving someone else of something, or both) via false pretences. Here the something is money (as is most commonly the case) and the gaining/depriving is gaining money and depriving investors of it. It is more complicated than that, with many things that fit this simple description not legally being considered fraud (though perhaps being considered another crime), and it can vary a fair bit between legal jurisdictions.

A cynical thought is that the key line being crossed here is that the victims are well-off investors; if you or I were conned similarly, the law might give less of a stuff because we can't afford the legal team that these investors have. This is why cases like this one are successful, while companies feel safe conning their customers (i.e. selling an “unlimited” service that has, or develops five minutes after signing up, significant limits). Most investors wouldn't agree to the forced arbitration clauses and other crap that we routinely agree to by not reading the Ts & Cs before accepting them, and anyway they can afford large, capable legal resources, where our only hope would be a class action from which only the lawyers really benefit.

Another cynical thought is that the line crossed was the act of not being successful. I'm sure the investors wouldn't have cared about the fraud if the returns had been very good.

hahla · 5 months ago
Crossing the line into fraud is how you pitch it.
thatguy0900 · 5 months ago
I would imagine it turns into fraud when you don't tell investors about the human fallbacks.
gessha · 5 months ago
The Mechanical Turk, over and over again.

https://en.wikipedia.org/wiki/Mechanical_Turk

chatmasta · 5 months ago
The funny thing is you could probably make money on Amazon Mechanical Turk by hooking it up to an LLM. We’re at this weird limbo point in history where the fraud could go either way, depending on what you think you’re paying for…
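
A sketch of that arbitrage, assuming the OpenAI Python client for the model call. MTurk has no official worker-side API (boto3's "mturk" client is requester-side only), so the fetch/submit calls below are hypothetical stand-ins for what would really be web-UI automation:

    from openai import OpenAI  # assumes the openai package and an API key in the environment

    client = OpenAI()

    def answer_hit(hit_text: str) -> str:
        # Route a human-intelligence task to an LLM and return its answer.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": hit_text}],
        )
        return resp.choices[0].message.content

    # Hypothetical worker loop: fetch_next_hit() and submit_answer() do not
    # exist in any official SDK; they stand in for browser automation.
    # while True:
    #     hit = fetch_next_hit()
    #     submit_answer(hit.id, answer_hit(hit.text))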
makeitdouble · 5 months ago
Mechanical Turk exists because there is a line below which people are cheaper, even for massively parallel tasks.

If the LLM really costs less for the level of tasks that are paid for in MT right now, there would surely be a brief arbitrage period followed by a readjusting of that line, I assume (or just MT shutting down if it doesn't make sense anymore).

washadjeffmad · 5 months ago
I was warned and then suspended from MTurk around a decade ago while testing a workflow for audio transcription that worked a little too well. Not sure if the policies are more flexible today, but there was a lot of low hanging fruit back then.
cratermoon · 5 months ago
It's pretty well known that the AI companies are heavy users of Amazon mturk for their RLHF post-training.
ageitgey · 5 months ago
Over the past 5 years, there have been many startups that are variations of "AI can now automate interacting with companies that don't want to interact with you." This is common in healthcare, FinTech, consumer shopping, etc.

There are so many examples:

- We're going to automate provider availability, scheduling and booking hair/doctor/spa/whatever appointments for your users with AI phone calls

- We're going to sell a consumer device you talk to that will automate all your app interactions using "large action models"

- We're going to automate all of your hospital's health insurance company billing interactions with AI screen scrapers

- We're going to record your employees performing an action once in any business software tool and then automate it forever with AI to tie all your vendor systems together without custom programming.

- We're going to be able to buy anything for you from any website, automatically, no matter what fraud checks exist, because AI

Most of these start-ups are not "fraudulent"—they start with the best intentions (qualified tech founders, real target market, customers willing to pay if it works), but they eventually fail, pivot completely, or have to resort to fraud in a misguided attempt to stay alive.

The problem is that they are all using technology to try to solve a human problem. The current state of the world exists because the service provider on the other side of the equation doesn't want to be disintermediated or commoditized. They aren't going to sit there and be automated into compliance. If you perfect a way to call them with robots, they will stop answering the phone. If you perfect a way to automate their iPhone app on behalf of a user, they will block your IP address range and throw up increasingly arcane captchas. If you automate their login flows, they will switch to a different login flow or block customers they think are using automation. Your customer's experience is inconsistent at best, and you can never get rid of the humans in the loop. It leads to death by a thousand paper cuts until you bleed to death - despite customers still begging to pay for your service.

toomuchtodo · 5 months ago
fn-mote · 5 months ago
This should be the headline link.

It contains the details people are asking about, including (to me) what made this actionable fraud: the solicitation of $40MM from investors based on the completely false representation that his company used AI.

jxjnskkzxxhx · 5 months ago
It's funny that "it's a computer but I'll tell people it's a human" and "it's a human but I'll tell people it's a computer" are both common ideas.