Not excusing this in any way, but this app is apparently a fairly junior effort by university students. While it should make every effort to follow good security (and communication) practices, I'd not be too hard on them considering how some big VC-funded "adult" companies behave when presented with similar challenges.
I vehemently disagree. 'Well, they didn't know what they were doing, so we shouldn't judge them too harshly' is a silly thing to say. They didn't know what they were doing _and still went through with it_. That's an aggravating, not extenuating, factor in my book. Kind of like if a driver kills someone in an accident and then turns out not to have a license.
Still not excusing them, but these HN responses are very hypocritical.
US tech is built on the "go fast, break things" mentality. Companies with huge backers routinely fail at security, and some of them actually spend money to suppress those who expose the companies' poor privacy/security practices.
If anything, college kids could at least reasonably claim ignorance, whereas a lot of HN folks here work for companies who do far worse and get away with it.
Some companies, some unicorns, knowingly and wilfully break laws to get ahead. But they're big, and people are getting rich working for them, so we don't crucify them.
> They didn't know what they were doing _and still went through with it_
You don't know what you don't know; sometimes people think they know what they're doing simply because they haven't yet run into a situation that proves otherwise. We were all new to programming once; no one would ever become a solid engineer if fear of making some mistake they couldn't foresee, out of lack of experience, stopped them from building anything.
+1: if you cannot do security, you have no business making dating apps. The kind of data those collect can ruin lives overnight. This is not a theory; here is a recent example: https://www.bbc.com/news/articles/c74nlgyv7r4o
This is exactly why I think software engineering should require a license, much like civil engineering. I get that people will complain about that destroying all sorts of things, and it might, yes, but fight me. Crap like this is exactly why it should be a requirement, and why you won't convince me that the idea is not, in general, a good one.
But no one was killed here, so your comparison really falls flat to me - there’s a reason we have a sliding scale of punishments that scale to the crime, and security issues are nowhere near the same level of severity as murder. It feels more like fining kids for putting up a lemonade stand without a business license.
I've also hit this link trying to get any info on "Cerca". It's from April 2025 and praises an app created two months earlier. It looks like LLM-hallucinated garbage. OP's entry mentions contacting the Cerca team in February. So either this entry is about a flaw detected at launch date, or it's some weird scheme.
Nonetheless: a "two-month-old vulnerability" in a "two-month-old, student-made app/service".
How is one supposed to know that it's just a bunch of script kiddies we shouldn't be too hard on, if their app gets released under "Cerca Applications, LLC"?
Fair point, but come on. Not returning the OTP (which is supposed to be a secret) in the response payload is common sense, whether you are a seasoned developer or a high school student.
It is also a commercial product, not something they made for fun:
Sadly, it's not common sense. I've worked with dozens of people who just throw arbitrary state into front-end response payloads because they're lazy and just push to the front-end whatever comes from the service API.
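For what it's worth, here's a minimal sketch of the difference between the two habits: echoing whatever internal state the OTP step produced versus returning a deliberately minimal payload. It's a generic Flask-style example with made-up routes and a stubbed SMS call, not Cerca's actual code:

```python
# Hypothetical sketch, not Cerca's code: the anti-pattern is returning whatever
# the OTP step produced; the fix is a deliberately minimal response payload.
import secrets
from flask import Flask, jsonify, request

app = Flask(__name__)
OTP_STORE = {}  # phone -> code; a real app would store a hashed code with an expiry


def send_sms(phone: str, body: str) -> None:
    """Stub for an SMS provider call (e.g. Twilio); out of scope here."""


def create_otp(phone: str) -> dict:
    code = f"{secrets.randbelow(1_000_000):06d}"
    OTP_STORE[phone] = code
    return {"phone": phone, "otp": code, "status": "sent"}  # internal state


@app.post("/otp/request")
def request_otp_leaky():
    record = create_otp(request.json["phone"])
    send_sms(record["phone"], record["otp"])
    return jsonify(record)  # leaks the OTP to whoever called the API


@app.post("/v2/otp/request")
def request_otp_minimal():
    record = create_otp(request.json["phone"])
    send_sms(record["phone"], record["otp"])
    return jsonify({"status": "sent"})  # the secret only travels over SMS
```

The lazy version is one line shorter, which is presumably how this keeps happening.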
I have an idea: if you don't know anything about app security, don't make an app. "Whataboutism" notwithstanding, this actually made me feel a little ill, and your comment didn't help. I have younger friends who use dating sites, and having their information exposed to whoever wants it is gross; the people who made it should feel bad.
They should feel bad about not communicating with the "researcher" after the fact, too. If I had been blown off by a "company" after telling them everything was wide open to the world for the taking, the resulting "blog post" would not be so polite.
There's nothing wrong with making your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but before releasing. The release stage should produce a revamped, hardened version of the POC, not the damn POC itself. The hardened version should have the security stuff added.
That's much better than telling people stop making apps.
You're shouting into the void. The people making this type of product have zero regard for their users' data or for engineering and security best practices. They're using AI to pump out a product as quickly as possible, and if it doesn't work (i.e. make them money), they'll do it again with something else.
There is a point to your comment, but I am afraid you are shouting at the wrong thing.
Instead, I think this is the fair approach: anyone is free to make a website/app/VR world whatever, but if it stores any kind of PII, you had better know what you are doing. The problem is not security. The problem is PII. If someone's AWS key got hacked, leaked and used by others, well it's bad, but that's different from my personal information getting leaked and someone applying for a credit card on my behalf.
Programming should require a government-issued license reserved for alumni of duly certified schools. Possession of a Turing-complete compiler or interpreter without permission should be a felony.
At the end of the day it's an ROI analysis (using the term loosely here; it's more of a gut feel): what are the costs and benefits of making an app more secure vs. pushing out an insecure version faster? Unfortunately, in today's business and funding climate, the latter has the better payoff (for most things anyway).
Until the balance of incentives changes, I don't see any meaningful change in behavior unfortunately.
I hear you but if you're processing passports and sexual preferences you have to at least respond to the security researcher telling you how you're leaking them to absolutely anyone. This is a total clusterfuck and there are zero excuses for the lack of security here.
You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.
So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.
I worry about my own liability sometimes as an engineer at a small company. So many businesses operate outside of regulated industries, where PCI or HIPAA don't apply. For smaller organizations, security is just an engineering concern - not an organizational mandate. The product team is focused on the features, the PM is focused on the timeline, QA is focused on finding bugs, and it goes on and on, but rarely is there a voice of reason speaking about security. Engineers are expected to deliver tasks on the board and little else. If the engineers can make the product secure without hurting the timeline, then great. If not, the engineers end up catching heat from the PM or whomever.
They'll say things like...
"Well, how long will that take?"
or, "What's really the risk of that happening?"
or, "We can secure it later, let's just get the MVP out to the customer now"
So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?
You're not really an engineer. You won't be signing any design plans certifying their safety, and you won't be liable when it's proven that they aren't safe.
If it's an LLC/Corp you should be protected by the corporate veil unless you've otherwise documented you're committing criminal behavior.
But yea, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.
I would personally want to know the law enough to protect myself, push back on anything illegal in writing, and then get written approval to disregard to be totally covered - but I understand that even this can be hard if you’re one or two devs deep at a startup or whatever. Personally, if I didn’t think they were pursuing legal work I’d leave.
As an engineer in a small org, I think it's our responsibility to educate the rest of the team about these risks and push to make sure we get engineering time to mitigate them. It's not easy, but it's important stuff that could sink the business if it's not taken seriously.
As much as I despise the "I was just following orders" defense, do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.
Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company suffers a token fine and they move on without caring.)
> do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.
A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.
“I’m not putting that in writing” they said.
However it did have the desired effect and they backed down.
You do need to be super comfortable with your position in the company to pull that stunt though. This was for a UK firm and I was managing a team of DevOps engineers. So I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I’d have pulled this stunt if I was a much more replaceable software engineer in an American startup. And particularly not in the current job climate.
To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).
You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345, and your friend signs up and gets id 12357 - that should be enough to prove that you can find the id and access the profile of any user.
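Something like this would have been enough, in other words. A sketch with invented endpoint names and placeholder IDs, since the post doesn't document the real API:

```python
# Hypothetical sketch with invented endpoints: demonstrate the broken access
# control using two consenting accounts instead of scraping strangers' profiles.
import requests

BASE = "https://api.example-dating-app.test"  # placeholder, not the real host
MY_TOKEN = "token-from-my-own-login"          # session for my account, id 12345
FRIEND_ID = 12357                             # my friend's id, shared with consent

resp = requests.get(
    f"{BASE}/users/{FRIEND_ID}",
    headers={"Authorization": f"Bearer {MY_TOKEN}"},
    timeout=10,
)

# If my session can read a profile that isn't mine, the vulnerability is proven;
# there's no need to enumerate thousands of other users to make the point.
print(resp.status_code, resp.json() if resp.ok else resp.text)
```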
As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.
Eh, part of assessing the vulnerability is how deep it goes. Showing that there were no gates or roadblocks to accessing all the data is a valid thing to research; otherwise they can later say "oh, we had rate limiting in place" or "we had network vulnerability scanners which would've prevented a wholesale leak".
>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.
They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.
>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.
I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.
I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.
[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...
> They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.
They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.
Yeah, I guess there's no reason for the API to ever return the OTP, but the severity depends on how you call the API. If the API is `api.cercadating.com/otp/<unpredictable-40-character-token>`, then that's not so bad. If it's `api.cercadating.com/otp/<guessable four-digit number>` that's a different story.
From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.
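For illustration, here's a sketch of the "unpredictable token" variant, assuming the flow is keyed on something the server hands out rather than on the phone number the caller typed in (names and structure are made up):

```python
# Sketch of keying the OTP flow on an unguessable server-generated token
# instead of on the phone number itself.
import secrets

PENDING = {}  # request_token -> phone number, kept server-side only


def start_otp_flow(phone: str) -> str:
    request_token = secrets.token_urlsafe(32)  # ~256 bits, not enumerable
    PENDING[request_token] = phone
    # Generate and SMS the 6-digit code here; never include it in the response.
    return request_token


# The client later submits (request_token, code-from-SMS). Guessing either the
# token or someone else's phone number gets an attacker nowhere on its own.
```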
Read the original complaint in the Auernheimer case. Prosecutors had (extensive) intent evidence that is unlikely to exist here. The defendants in that case were also accused of disclosing the underlying PII, which is not something that appears to have happened here.
I was going to say the headline of the post, "I hacked..." could almost be taken as a confession. But that's not the actual title of the linked article. I'm almost tempted to flag this submission for clickbait embellishment in the title.
I had a similar experience with another dating app, although they never got back to me. When I tried to get the founder's attention by changing his bio to a "contact me" text, they restored a backup lol
Years later I saw their Instagram ad and tried to see if the issue still existed, and yes, it did. Basically, anyone with knowledge of their API endpoints (which are easy to find by proxying the app's traffic) has full-on admin capabilities and access to all messages, matching, etc.
People need to be forced to think twice before taking in such sensitive information as a passport or even just addresses. This sort of thing cannot be allowed to be brushed off as just a bunch of kids making an app.
And for things like passport or other ID details, there's also no reason to expose them publicly at all after they've been entered. If you want an API available to fetch the data so you can display it in the UI, there's no need to include the full passport/ID number; at the very least it can be obscured with only the last few digits sent back via the API.
But for something like a dating site, it's enough for the API to just return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.
(In contrast with, say, an airline app where you need to select an identity document for immigration purposes, where you'd want to give the user more details so they can make the choice. But even then, as they do in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
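A sketch of what that response shape could look like, with invented field names (just one way to carve it up, not how Cerca actually models it):

```python
# Sketch of "expose a verification status, not the document itself".
from dataclasses import dataclass
from enum import Enum


class IdVerification(str, Enum):
    NOT_VERIFIED = "not-verified"
    PASSPORT = "passport"
    DRIVERS_LICENSE = "drivers-license"


@dataclass
class ProfileResponse:
    user_id: int
    display_name: str
    id_verification: IdVerification     # enough for a "verified" badge
    id_number_last4: str | None = None  # only if the UI truly needs it


def to_profile_response(user) -> ProfileResponse:
    """Map the internal user record to the fields the API is allowed to expose."""
    return ProfileResponse(
        user_id=user.id,
        display_name=user.display_name,
        id_verification=IdVerification(user.id_verification),
        id_number_last4=user.id_number[-4:] if user.id_number else None,
    )
```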
"They" don't care, the entire point of many of these laws is to increase the friction and fear of being disclosed that you don't visit these sites in the first place.
Were they not using some kind of third party identity verification service? That's what I usually see apps do. Don't tell me those third party services still share your ID with the app (like the actual images)?
Read the article. They clearly have their own OTP setup.
But if they are asking for your passport, then they have access to it. It's not a third party asking and providing them with some checkmark or other reduced risk data.
One reason I could think of is that they may return the database (or cache, or something else) response after generating and storing the OTP. Quick POCs/MVPs often use their storage models for API responses to save time, and then it is an easy oversight...
Save an HTTP request, and faster UX! What's not to love?
When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported and got a bounty, but this sort of shit winds up in big companies' APIs, who really should know better.
It appears that the OTP is sent from "the response from triggering the one-time password".
I suspect it's a framework thing; they're probably directly serializing an object that's put in the database (ORM or other storage system) to what's returned via HTTP.
Oversight. Frameworks tend to make it easy to make an API endpoint by casting your model to JSON or something, but it's easy to forget you need to make specific fields hidden.
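A minimal sketch of that failure mode and the usual guardrail, assuming a FastAPI/Pydantic-style stack (which may well not be what they use): the storage model carries fields that must never leave the server, and an explicit response schema is what keeps "return the model" from shipping them to the client.

```python
# Hypothetical sketch: an allow-listed response schema vs. serializing the model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class OtpRecord(BaseModel):   # what gets stored/passed around internally
    phone: str
    code: str                 # the secret
    expires_at: int


class OtpPublic(BaseModel):   # what the endpoint is allowed to return
    phone: str
    status: str = "sent"


@app.post("/otp", response_model=OtpPublic)
def trigger_otp(phone: str):
    record = OtpRecord(phone=phone, code="123456", expires_at=0)  # placeholder
    # Returning the full record is safe only because response_model strips it
    # down to OtpPublic's fields; drop that allow-list and `code` goes out too.
    return record
```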
I assume that whoever wrote it just has absolutely no mental model of security, has never been on the attacking side or realised that clients can't be trusted, and only implemented the OTP authentication because they were "going through the motions" that they'd seen other people implement.
My best guess would be some form of testing before they added the actual "send the message" part to the API. Build the OTP logic, the scaffolding... and add a way to make sure it returns what you expect. But yes, absolutely wild.
https://georgetownvoice.com/2025/04/06/georgetown-students-c...
It's hard to tell these days what is real.
LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which directly links to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...
The date of this article is May 2025, and it references an interview with the founders.
STOP. MAKING. APPS.
This can only be solved by regulation.
> noun
> a person who designs, builds, or maintains engines, machines, or public works.
While you can definitely want PII protected, scraping the data to prove a point is unnecessary and hypocritical.
I wonder if I should go back and try again... :-?
Or by someone "government-like" such as Apple or Google.
(not an attack on you. I have to say that every time I see someone say anything along the lines of "the government should do it")
Governments should not be confirming shit.
https://en.wikipedia.org/wiki/General_Data_Protection_Regula...
It’s very sensible and an obvious solution if you don’t think about the security of it.
A dating app is one of the most dangerous kinds of app to make due to all the necessary PII. This is horrible.
This is big brain energy. Why bother needing to make yet another round trip request when you can just defer that nonsense to the client!
Maybe to make it easier to build the form accepting the OTP? Oversight?
I can't think of any other reasons.
^another article on this
Oh, ok. Too bad, I guess.