michaelteter · 3 months ago
Not excusing this in any way, but this app is apparently a fairly junior effort by university students. While its developers should make every effort to follow good security (and communication) practices, I'd not be too hard on them considering how some big VC-funded "adult" companies behave when presented with similar challenges.

https://georgetownvoice.com/2025/04/06/georgetown-students-c...

tmtvl · 3 months ago
I vehemently disagree. 'Well, they didn't know what they were doing, so we shouldn't judge them too harshly' is a silly thing to say. They didn't know what they were doing _and still went through with it_. That's an aggravating, not extenuating, factor in my book. Kind of like if a driver kills someone in an accident and then turns out not to have a license.
michaelteter · 3 months ago
Still not excusing them, but these HN responses are very hypocritical.

US tech is built on the "move fast and break things" mentality. Companies with huge backers routinely fail at security, and some of them actually spend money to suppress those who expose their poor privacy/security practices.

If anything, college kids could at least reasonably claim ignorance, whereas a lot of HN folks here work for companies who do far worse and get away with it.

Some companies, some unicorns, knowingly and wilfully break laws to get ahead. But they're big, and people are getting rich working for them, so we don't crucify them.

mmanfrin · 3 months ago
> They didn't know what they were doing _and still went through with it_

You don't know what you don't know; people can think they know what they're doing simply because they haven't yet hit a situation that proves otherwise. We were all new to programming once; no one would ever become a solid engineer if fear of making some mistake they lacked the experience to anticipate stopped them from building anything.

dmitrygr · 3 months ago
+1: if you cannot do security, you have no business making dating apps. The kind of data those collect can ruin lives overnight. This is not a theory, here is a recent example: https://www.bbc.com/news/articles/c74nlgyv7r4o
LadyCailin · 3 months ago
This is exactly why I think software engineering should have a licensing requirement, much like civil engineering. I get that people will complain that it would destroy all sorts of things, and it might, yes, but fight me. Crap like this is exactly why it should be a requirement, and why you won't convince me that the idea is not, in general, a good one.
paulddraper · 3 months ago
The difference is…this isn’t an automobile and the accident isn’t fatal.
johnfn · 3 months ago
But no one was killed here, so your comparison really falls flat for me - there's a reason we have a sliding scale of punishments that scales with the crime, and security issues are nowhere near the same level of severity as murder. It feels more like fining kids for putting up a lemonade stand without a business license.
voytec · 3 months ago
I also hit this link trying to find any info on "Cerca". It's from April 2025 and praises an app created two months earlier. It reads like LLM-hallucinated garbage. OP's post mentions contacting the Cerca team in February, so either this post is about a flaw detected at launch or something weirder is going on.

Nonetheless: a two-month-old vulnerability and a two-month-old student-made app/service.

michaelteter · 3 months ago
Ah that's a shame.

It's hard to tell these days what is real.

LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which links directly to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...

The article is dated May 2025, and it references an interview with the founders.

barbazoo · 3 months ago
How is one supposed to know it's just a bunch of script kiddies we shouldn't be too hard on when their app gets released under "Cerca Applications, LLC"?
yard2010 · 3 months ago
These guys should probably study something else.
selcuka · 3 months ago
Fair point, but come on. Not returning the OTP (which is supposed to be a secret) in the response payload is common sense, whether you are a seasoned developer or a high school student.

It is also a commercial product, not something they made for fun:

    In-App Purchases
    - Cerca App $9.99
    - Cerca App 3 month $9.99
    - 10 Swipes $2.99
    - 3 Swipes $0.99
    - 5 swipes $1.99
    - 3 Searches $1.99
    - 10 Searches $3.99
    - 5 Searches $2.99

root_axis · 3 months ago
Sadly, it's not common sense. I've worked with dozens of people who throw arbitrary state into front-end response payloads because they're lazy and just push to the front-end whatever comes back from the service API.
cAtte_ · 3 months ago
this stops applying when your cute little app starts storing people's passports
mbs159 · 3 months ago
Although I do agree with the sentiment, you should not handle sensitive information if you are not capable of doing so properly.
genewitch · 3 months ago
I have an idea: if you don't know anything about app security, don't make an app. "Whataboutism" notwithstanding, this actually made me feel a little ill, and your comment didn't help. I have younger friends who use dating sites, and having their information exposed to whoever wants it is gross; the people who made it should feel bad.

They should feel bad about not communicating with the "researcher" after the fact, too. If I had been blown off by a "company" after telling them everything was wide open for the taking, the resulting "blog post" would not be so polite.

STOP. MAKING. APPS.

dylan604 · 3 months ago
Stop pushing POCs into PROD.

There's nothing wrong with building your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but it happens before release. Part of the release stage should be a revamped/weaponized version of the POC, not the damn POC itself; the weaponized version is where the security stuff gets added.

That's much better than telling people stop making apps.

imiric · 3 months ago
You're shouting into the void. The people making this type of product have zero regard for their users' data or for engineering and security best practices. They're using AI to pump out a product as quickly as possible, and if it doesn't work (i.e. make them money), they'll do it again with something else.

This can only be solved by regulation.

rs186 · 3 months ago
There is a point to your comment, but I am afraid you are shouting at the wrong thing.

Instead, I think this is the fair approach: anyone is free to make a website/app/VR world/whatever, but if it stores any kind of PII, you had better know what you are doing. The problem is not security; the problem is PII. If someone's AWS key gets hacked, leaked, and used by others, well, that's bad, but it's different from my personal information getting leaked and someone applying for a credit card on my behalf.

ghssds · 3 months ago
Programming should require a government-issued license reserved for graduates of duly certified schools. Possession of a Turing-complete compiler or interpreter without permission should be a felony.
yibg · 3 months ago
At the end of the day it's an ROI analysis (using the term loosely here; it's more of a gut feel): what are the costs and benefits of making an app more secure vs. pushing out an insecure version faster? Unfortunately, in today's business and funding climate, the latter has the better payoff (for most things, anyway).

Until the balance of incentives changes, I don't see any meaningful change in behavior unfortunately.

peterldowns · 3 months ago
I hear you but if you're processing passports and sexual preferences you have to at least respond to the security researcher telling you how you're leaking them to absolutely anyone. This is a total clusterfuck and there are zero excuses for the lack of security here.
imiric · 3 months ago
That sounds like you're excusing them.

You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.

So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.

SpaceL10n · 3 months ago
I worry about my own liability sometimes as an engineer at a small company. So many businesses operate outside of regulated industries, where PCI or HIPAA don't apply. For smaller organizations, security is just an engineering concern - not an organizational mandate. The product team is focused on the features, the PM is focused on the timeline, QA is focused on finding bugs, and so it goes on, but rarely is there a voice of reason speaking about security. Engineers are expected to deliver the tasks on the board and little else. If the engineers can make the product secure without hurting the timeline, then great. If not, the engineers end up catching heat from the PM or whomever.

They'll say things like...

"Well, how long will that take?"

or, "What's really the risk of that happening?"

or, "We can secure it later, let's just get the MVP out to the customer now"

So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?

SoftTalker · 3 months ago
You're not really an engineer. You won't be signing any design plans certifying their safety, and you won't be liable when it's proven that they aren't safe.
kohbo · 3 months ago
Depends on your industry. Even if SWEs aren't out here getting PEs, there is absolutely someone signing off on all things safety-related.
marcellus23 · 3 months ago
> engineer

> noun

> a person who designs, builds, or maintains engines, machines, or public works.

pixl97 · 3 months ago
If it's an LLC/Corp you should be protected by the corporate veil, unless you've otherwise documented that you're committing criminal behavior.

But yea, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.

sailfast · 3 months ago
I would personally want to know the law well enough to protect myself, push back in writing on anything illegal, and then get written approval to disregard my objection so I'm totally covered - but I understand that even this can be hard if you're one of only a couple devs at a startup or whatever. Personally, if I didn't think they were pursuing legal work I'd leave.
remus · 3 months ago
As an engineer in a small org, I think it's our responsibility to educate the rest of the team about these risks and push to make sure they get engineering time to mitigate them. It's not easy, but it's important stuff that could sink the business if it's not taken seriously.
kelnos · 3 months ago
As much as I despise the "I was just following orders" defense, do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company pays a token fine and moves on without caring.)

hnlmorg · 3 months ago
> do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.

“I’m not putting that in writing” they said.

However it did have the desired effect and they backed down.

You do need to be super comfortable with your position in the company to pull that stunt though. This was for a UK firm and I was managing a team of DevOps engineers. So I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I’d have pulled this stunt if I was a much more replaceable software engineer in an American startup. And particularly not in the current job climate.

hiatus · 3 months ago
Are you an officer of the company? If not I would not think you could be personally liable.
yieldcrv · 3 months ago
not in my experience
andrelaszlo · 3 months ago
Oops! Nice find!

To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).

You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345, and your friend signs up and gets id 12357 - that should be enough to prove that you can find the id and access the profile of any user.

As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.
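
A minimal sketch of that consent-based check (the endpoint path, IDs, and token are all invented for illustration, not Cerca's actual API):

    import requests

    BASE = "https://api.example.com/users"   # hypothetical endpoint shape
    MY_ID, FRIEND_ID = 12345, 12357          # two accounts you control or have consent for
    TOKEN = "<token-from-your-own-session>"

    # Logged in as yourself, fetch your friend's profile by its ID.
    # If their PII comes back, the enumeration/IDOR bug is proven
    # without touching any stranger's data.
    resp = requests.get(f"{BASE}/{FRIEND_ID}",
                        headers={"Authorization": f"Bearer {TOKEN}"})
    print(resp.status_code, resp.json())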

ofjcihen · 3 months ago
This is the standard and obvious way to go about things that most security researchers ignore.

While you can legitimately want PII protected, scraping the data to prove a point is unnecessary and hypocritical.

strunz · 3 months ago
Eh, part of assessing a vulnerability is how deep it goes. Showing that there were no gates or roadblocks to accessing all the data is a valid thing to research; otherwise they could later say "oh, we had rate limiting in place" or "we had network vulnerability scanners which would've prevented a wholesale leak".
mtlynch · 3 months ago
This is a pretty confusing writeup.

>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.

They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.

I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.

I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.

[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...

lima · 3 months ago
> They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.

mtlynch · 3 months ago
Yeah, I guess there's no reason for the API to ever return the OTP, but the severity depends on how you call the API. If the API is `api.cercadating.com/otp/<unpredictable-40-character-token>`, then that's not so bad. If it's `api.cercadating.com/otp/<guessable four-digit number>` that's a different story.

From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.
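
Back-of-the-envelope arithmetic on why those two shapes are worlds apart (no claims about the actual API, just the math):

    import math

    four_digit = 10 ** 4        # guessable space: ~10,000 tries covers it all
    token_40 = 62 ** 40         # random alphanumeric token: ~5 x 10^71 values
    print(four_digit)                     # 10000
    print(f"{math.log10(token_40):.1f}")  # ~71.7 -- infeasible to brute force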

tptacek · 3 months ago
Read the original complaint in the Auernheimer case. Prosecutors had (extensive) intent evidence that is unlikely to exist here. The defendants in that case were also accused of disclosing the underlying PII, which is not something that appears to have happened here.
SoftTalker · 3 months ago
I was going to say the headline of the post, "I hacked..." could almost be taken as a confession. But that's not the actual title of the linked article. I'm almost tempted to flag this submission for clickbait embellishment in the title.
mtlynch · 3 months ago
Yeah, I agree Auernheimer was a much more attractive target for prosecution, but do you think this student is legally safe in what they're doing here?
shayanbahal · 3 months ago
I had a similar experience with another dating app, although they never got back to me. When I tried to get the founder's attention by changing his bio to "contact me" text, they restored a backup, lol.

Years later I saw their Instagram ad and tried to see if the issue still existed, and yes, it did. Basically anyone with knowledge of their API endpoints (which are easy to find using an app proxy server) has full-on admin capabilities and access to all messages, matching, etc.

I wonder if I should go back and try again... :-?

cobalt60 · 3 months ago
Why not disclose it as a responsible dev with contacts and move on?
pixl97 · 3 months ago
If a company is not responsible enough to follow up on security reports, you shouldn't keep following up either; instead, disclose it to the world.
nixpulvis · 3 months ago
People need to be forced to think twice before taking in such sensitive information as a passport or even just addresses. This sort of thing cannot be allowed to be brushed off as just a bunch of kids making an app.
kelnos · 3 months ago
And for things like passport or other ID details, there's also no reason to expose them publicly at all after they've been entered. If you want an API available to fetch the data so you can display it in the UI, there's no need to include the full passport/ID number; at the very least it can be obscured with only the last few digits sent back via the API.

But for something like a dating site, it's enough for the API to just return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.

(In contrast with, say, an airline app where you need to select an identity document for immigration purposes, where you'd want to give the user more details so they can make the choice. But even then, as they do in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
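
A sketch of that shape (names and types invented, not Cerca's actual API):

    from dataclasses import dataclass
    from enum import Enum

    class IdStatus(Enum):
        NOT_VERIFIED = "not-verified"
        PASSPORT = "passport"
        DRIVERS_LICENSE = "drivers-license"

    @dataclass
    class ProfileView:        # the only identity fields the client ever sees
        id_status: IdStatus
        id_last4: str         # display stub, e.g. "6789"; never the full number

    def to_profile_view(full_passport_number: str) -> ProfileView:
        # The full document number stays server-side (ideally encrypted);
        # the API ships only the verification status and a stub.
        return ProfileView(IdStatus.PASSPORT, full_passport_number[-4:])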

VBprogrammer · 3 months ago
The UK government are trying really hard to mandate IDs for access to porn sites. Can't wait for that to blow up in their faces.
pixl97 · 3 months ago
"They" don't care, the entire point of many of these laws is to increase the friction and fear of being disclosed that you don't visit these sites in the first place.
webninja · 3 months ago
Generally speaking, the UK government doesn't care about its citizens. That's why so many have left for the USA in search of a better life.
jonny_eh · 3 months ago
There should be some kind of government-operated identity confirmation service that is secure/private.

Or by someone "government-like" such as Apple or Google.

clifflocked · 3 months ago
OAuth exists and can be used to confirm someone's identity by linking their Google account.
steeeeeve · 3 months ago
Government is the worst possible solution to every problem.

(not an attack on you. I have to say that every time I see someone say anything along the lines of "the government should do it")

nixpulvis · 3 months ago
I would rather see an FDPA (Federal Data Protection Administration) which goes after people who get this stuff wrong.
behringer · 3 months ago
When I worked for the government, within two months they had leaked all of my data to the black market.

Governments should not be confirming shit.

vincvinc · 3 months ago
nixpulvis · 3 months ago
The US desperately needs similar legislation.
koakuma-chan · 3 months ago
Were they not using some kind of third party identity verification service? That's what I usually see apps do. Don't tell me those third party services still share your ID with the app (like the actual images)?
nixpulvis · 3 months ago
Read the article. They clearly have their own OTP setup.

But if they are asking for your passport, then they have access to it. It's not a third party doing the asking and providing them with a checkmark or other reduced-risk data.

blantonl · 3 months ago
Returning the OTP in the response to the OTP request is wild. Like, why?
MBCook · 3 months ago
So the UI can check if what they enter is correct.

It’s very sensible and an obvious solution if you don’t think about the security of it.

A dating app is one of the most dangerous kinds of app to make due to all the necessary PII. This is horrible.
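
For contrast, the boring correct flow keeps the code server-side and exposes only a verify endpoint. A minimal sketch, assuming a Flask-style app and a hypothetical SMS helper (not Cerca's actual stack):

    import secrets
    import time

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    OTPS = {}  # phone -> (code, expiry); a real app would use a proper store

    def send_sms(phone, code):
        """Hypothetical SMS gateway call."""

    @app.post("/otp/request")
    def request_otp():
        phone = request.json["phone"]
        code = f"{secrets.randbelow(10**6):06d}"
        OTPS[phone] = (code, time.time() + 300)  # valid for 5 minutes
        send_sms(phone, code)
        return jsonify({"status": "sent"})       # the code never leaves the server

    @app.post("/otp/verify")
    def verify_otp():
        phone, code = request.json["phone"], request.json["code"]
        stored = OTPS.pop(phone, None)           # single-use: always consume it
        ok = (stored is not None and stored[1] > time.time()
              and secrets.compare_digest(stored[0], code))
        return jsonify({"verified": ok})

The verify call is exactly the round trip the leaky design was "saving".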

ryanisnan · 3 months ago
> if you don’t think about the security of it.

This is big brain energy. Why bother making yet another round-trip request when you can just defer that nonsense to the client!

benmmurphy · 3 months ago
I’ve seen banks where the OTP code is generated on the client and then sent to the server.
pydry · 3 months ago
Smacks of vibe coding
hectormalot · 3 months ago
One reason I could think of is that they may return the database (or cache, or something else) response after generating and storing the OTP. Quick POCs/MVPs often use their storage models for API responses to save time, and then it is an easy oversight...
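
Roughly what that oversight looks like (hypothetical handler; the framework is an assumption):

    import secrets

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.post("/otp/request")
    def request_otp_leaky():
        row = {"phone": request.json["phone"],
               "otp": f"{secrets.randbelow(10**6):06d}"}
        # save_row(row)  <- hypothetical persistence call
        return jsonify(row)  # bug: echoes the stored row, secret and all
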
oulu2006 · 3 months ago
That's my first thought as well - like a basic CRUD operation that returns the row that was created as the response.
matja · 3 months ago
Eliminate your database costs with this one easy trick!
ceejayoz · 3 months ago
Save a HTTP request, and faster UX! What's not to love?

When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported and got a bounty, but this sort of shit winds up in big companies' APIs, who really should know better.

gwbas1c · 3 months ago
It appears that the OTP is sent from "the response from triggering the one-time password".

I suspect it's a framework thing: they're probably serializing the same object they put in the database (via an ORM or other storage system) directly into the HTTP response.

mooreds · 3 months ago
I too am bewildered.

Maybe to make it easier to build the form accepting the OTP? Oversight?

I can't think of any other reasons.

Vuska · 3 months ago
Oversight. Frameworks tend to make it easy to make an API endpoint by casting your model to JSON or something, but it's easy to forget you need to make specific fields hidden.
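
One common guard is a dedicated response schema, so leaking a field takes effort instead of being the default (Pydantic here is an assumption, not Cerca's known stack):

    from pydantic import BaseModel

    class OtpRow(BaseModel):           # what gets stored
        phone: str
        otp: str
        expires_at: float

    class OtpSentResponse(BaseModel):  # what goes over the wire
        phone: str
        status: str = "sent"

    def to_response(row: OtpRow) -> OtpSentResponse:
        # Whitelist fields explicitly instead of dumping the model;
        # the secret can't appear in the payload by construction.
        return OtpSentResponse(phone=row.phone)
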
Alex-Programs · 3 months ago
I assume that whoever wrote it just has absolutely no mental model of security, has never been on the attacking side or realised that clients can't be trusted, and only implemented the OTP authentication because they were "going through the motions" that they'd seen other people implement.
ksala_ · 3 months ago
My best guess would be a leftover from testing, from before they added the "send the message" part to the API: build the OTP logic and the scaffolding... and add a way to make sure it returns what you expect. But yes, absolutely wild.
bearsyankees · 3 months ago
ahaferburg · 3 months ago
> Developers should use best practices, but they may not be sufficient, he added. “Keeping data secure is an unsolved problem,” he said.

Oh, ok. Too bad, I guess.