44za12 · 2 months ago
Absolutely wild. I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box. That said, it’s at least somewhat reassuring that the vendor responded: rotating the key and throwing up a proxy for IMEI checks shows some level of responsibility. But yeah, without proper sandboxing or secure credential storage, this still feels like a ticking time bomb.
hn_throwaway_99 · 2 months ago
> I can’t believe these shipped with a hardcoded OpenAI key and ADB access right out of the box.

As someone with a lot of experience in the mobile app space, and tangentially in the IoT space, I can most definitely believe this, and I am not surprised in the slightest.

Our industry may "move fast", but we also "break things" frequently and don't have nearly the engineering rigor found in other domains.

rvnx · 2 months ago
It was a good thing for user privacy that the keys were directly on the device; it was only in DAN mode that a copy of the chats was sent.

So eventually if they remove the keys from the device, messages will have to go through their servers instead.

lucasluitjes · 2 months ago
Hardcoded API keys and poorly secured backend endpoints are surprisingly common in mobile apps. Sort of like how common XSS/SQLi used to be in webapps. Decompiling an APK seems to be a slightly higher barrier than opening up devtools, so they get less attention.

Since debugging hardware is an even higher threshold, I would expect hardware devices like this to be wildly insecure unless there are strong incentives for investing in security. Same as the "security" of the average IoT device.
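(For illustration, a minimal Python sketch of the kind of scan that turns these up, assuming the APK has already been decompiled with something like jadx; the output directory and key pattern here are made-up assumptions:)

    import os
    import re

    # Rough shape of an OpenAI-style secret key; the pattern is an assumption.
    KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

    def scan_tree(root: str) -> None:
        # Walk the decompiled source tree and report anything key-shaped.
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        for lineno, line in enumerate(f, 1):
                            for match in KEY_PATTERN.finditer(line):
                                print(f"{path}:{lineno}: {match.group()}")
                except OSError:
                    pass  # unreadable file, skip it

    scan_tree("decompiled_apk/")  # hypothetical jadx/apktool output directory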

bigiain · 2 months ago
Eventually someone is going to get a bill for the OpenAI key usage. That will provide some incentive. (Incentive to just rotate the key and brick all the devices rather than fix the problem, most likely.)
anitil · 2 months ago
The IoT and embedded space is simultaneously obsessed with IP protection, fuse-protecting code, etc., and incapable of managing the life cycle of secrets. I worked at one company that actually did it well on-device, but neglected that they had to ship their testing setup overseas, including certain keys. So even if you couldn't break into the device, you could 'acquire' one of the testing devices and have at it.
switchbak · 2 months ago
I think we'll see plenty of this as the wave of vibe-coded apps starts rolling in.
psim1 · 2 months ago
Indeed, brace yourselves as the floodgates holding back the poorly-developed AI crap open wide. If anyone is thinking of a career pivot, now is the time to dive into all things cybersecurity. It's going to get ugly!
725686 · 2 months ago
The problem with cybersecurity is that you only have to screw up once, and you're toast.
8organicbits · 2 months ago
If that were true we'd have no cybersecurity professionals left.

In my experience, the work is focused on weakening vulnerable areas, auditing, incident response, and similar activities. Good cybersecurity professionals even get to know the business and tailor security to fit. The "one mistake and you're fired" mentality encourages hiding mistakes and suggests poor company culture.

immibis · 2 months ago
There's a difference between "cybersecurity" meaning the property of having a secure system, and "cybersecurity" as a field of human endeavour.

If your system has lots of vulnerabilities, it's not secure - you don't have cybersecurity. If your system has lots of vulnerabilities, you have a lot of cybersecurity work to do and cybersecurity money to make.

JohnMakin · 2 months ago
A “decrypt” function that just decodes base64 is almost too difficult to believe, but the number of times I’ve run into people who should know better yet think base64 is a secure encoding tells me otherwise.
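(A quick illustration of why that offers zero secrecy: base64 decodes with no key at all. The key string below is made up.)

    import base64

    secret = "sk-hypothetical-key"  # made-up example value
    encoded = base64.b64encode(secret.encode())  # what a fake "encrypt" might store

    # No key material needed; anyone holding the string can reverse it.
    print(base64.b64decode(encoded).decode())  # -> sk-hypothetical-key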
jcul · 2 months ago
The raw encrypted data is base64-encoded, probably just for ease of embedding the strings.

There is a decryption function that does the actual decryption.

Not to say it wouldn't be easy to reverse engineer or just run and check the return, but it's not just base64.

crtasm · 2 months ago
>However, there is a second stage which is handled by a native library which is obfuscated to hell
zihotki · 2 months ago
That native obfuscated crap still has to make an HTTP request in the end, so it's essentially just base64 with extra steps.
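(To sketch the point: route the device's traffic through an intercepting proxy and the credential falls out of the request, however obfuscated the native code is. A hypothetical mitmproxy addon, assuming the device trusts the proxy's CA:)

    # log_auth.py - run with: mitmdump -s log_auth.py
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Print any bearer token / API key the device sends upstream.
        auth = flow.request.headers.get("Authorization")
        if auth:
            print(f"{flow.request.pretty_url} -> {auth}")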
qoez · 2 months ago
They should have off-loaded security coding to the OAI agent.
java-man · 2 months ago
they probably did.
pvtmert · 2 months ago
not very surprising, given they left ADB debugging on...
_carbyau_ · 2 months ago
So easy a fancy webpage could do it. https://gchq.github.io/CyberChef/

I mean, it's from GCHQ so it is a bit fancy. It's got a "magic" option!

Cool thing being you can download it and run it yourself locally in your browser, no comms required.

jon_adler · 2 months ago
The humorous phrase “the S in IoT stands for security” can be applied to the wearable market too. I wonder if this rule applies to any market with fast release cycles, thin margins and low barriers to entry?
thfuran · 2 months ago
It pretty much applies to every market where security negligence isn't an existential threat to its perpetrators.
mikeve · 2 months ago
I love how "run DOOM" is listed first, over the possibility of customer data being stolen.
reverendsteveii · 2 months ago
I'm taking

>run DOOM

as the new

>cat /etc/passwd

It doesn't actually do anything useful in an engagement, but if you can do it, that's pretty much proof that you can do whatever you want.

jcul · 2 months ago
To be fair (or pedantic), in this post they didn't have root, so cat'ing /etc/passwd would not have been possible, whereas installing a Doom APK is trivial.
bigiain · 2 months ago
Popping Calc!

(I'm showing my age here, aren't I?)

neya · 2 months ago
I love how they tried to sponsor an empty YouTube channel hoping to sweep the whole thing under the carpet.
dylan604 · 2 months ago
if you don't have a bug bounty program but need to get creative to throw money at someone, this could be an interesting way of doing it.
93po · 2 months ago
Just offer them $10,000/hour security consulting and talk to them on the phone for 20 minutes.
rvnx · 2 months ago
It could be the developers trying to be nice to the guy, offering him this so it gets approved as marketing (which, in the end, is not so bad).
JumpCrisscross · 2 months ago
If they were smart they’d include anti-disparagement and confidentiality clauses in the sponsorship agreement. They aren’t, though, so maybe it’s just a pathetic attempt at bribery.
neya · 2 months ago
That was my first thought too
komali2 · 2 months ago
> "and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you."

Interesting. I'm assuming LLMs "correctly" interpret vague "please no China politics" system prompts like this, but if someone told me that I'd just be confused - like, don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin? What does this mean? LLMs, though, in my experience are smarter than me at understanding vague language. Maybe because I'm autistic and they're not.

williamscales · 2 months ago
> Don't discuss anything about the PRC or its politicians? Don't discuss the history of the Chinese empire? Don't discuss politics in Mandarin?

In my mind all of these could be relevant to Chinese politics. My interpretation would be "anything one can't say openly in China". I too am curious how such a vague instruction would be interpreted as broadly as would be needed to block all politically sensitive subjects.

rvnx · 2 months ago
There is no difference from other countries. In France, if you say bad things about certain groups of people, you can literally go to jail (but the censorship is directly IN the models).
pbhjpbhj · 2 months ago
If you consider that an LLM has a mathematical representation of how close any phrase is to "Chinese politics", then what to avoid should be relatively easy to comprehend. If I gave you a list and said 'these words are ranked by closeness to "Chinese politics"', you'd be able to easily check whether words were on the list, I feel.

I suspect you could talk readily about something you think is not Chinese politics - your granny's ketchup recipe, say. (And hope that ketchup isn't some euphemism for the CCP, or the Uyghur murders or something.)
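(A rough sketch of that intuition using off-the-shelf sentence embeddings; this is not how a chat model works internally, and the model name and phrases are just assumptions for illustration:)

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    anchor = model.encode("Chinese politics")
    for phrase in ["the Tiananmen Square protests", "granny's ketchup recipe"]:
        # Expect the first phrase to score much closer to the anchor.
        print(phrase, round(cosine(anchor, model.encode(phrase)), 3))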

komali2 · 2 months ago
Now I wonder whether its vectors correctly associate Winnie the Pooh as "related to Chinese politics." There's many other bizarre related associations.
Cthulhu_ · 2 months ago
I'm sure ChatGPT and co have a decent enough grasp of what is not allowed in China, but also that the naive "prompt engineers" for this application don't actually know how to "program" it well enough. But that's the difference between a prompt engineer and a software developer: the latter will want to exhaust all options and be precise, whereas an LLM can handle a bit more vagueness.

That said, I wouldn't be surprised if the developers can't freely put "tiananmen square 1989" in their code or in any API requests going to or from China either. How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?

aspenmayer · 2 months ago
> How can you express what can't be mentioned if you can't mention the thing that can't be mentioned?

> The City & the City is a novel by British author China Miéville that follows a wide-reaching murder investigation in two cities that exist side by side, each of whose citizens are forbidden to go into or acknowledge the other city, combining weird fiction with the police procedural.

https://en.wikipedia.org/wiki/The_City_%26_the_City

wat10000 · 2 months ago
Ask yourself: why are they saying this? You can probably surmise that they're trying to avoid stirring up controversy and getting into some sort of trouble. Given that, which topics would cause troublesome controversy? Definitely contemporary Chinese politics; Chinese history is mostly OK; non-Chinese politics in the Chinese language is fine.

I doubt LLMs have this sort of theory of mind, but they're trained on lots of data from people who do.

aspbee555 · 2 months ago
It is to ensure no discussion of Tiananmen Square.
yard2010 · 2 months ago
Why? What happened in Tiananmen square? Why shouldn't an LLM talk about it? Was it fashion? What was the reason?
landl0rd · 2 months ago
Just mentioning the CPC isn't life-threatening, while talking about Xinjiang, Tiananmen Square, or China's "common destiny" vision the wrong way is. You also have to figure out how to prohibit mentioning those things without explicitly mentioning them, as knowledge of them implies seditious thoughts.

I’m guessing most LLMs are aware of this difference.

throwawayoldie · 2 months ago
No LLMs are aware of anything.
p1necone · 2 months ago
Their email responses all show telltale signs of AI too, which is pretty funny.
paul-tharun · 2 months ago
I think it has to do with the language barrier and translation.