So what they did was add a new fake tenant with a premium 1-900 number and use the intercom to call it, earning themselves a bit of cash. Naturally, the landlords had to foot the bill.
Google, with all its money and smart engineers, was not able to build a simple chat application.
I don’t think that not being able to communicate your results makes it not scientific.
I was willing to believe that maybe it was just a massive NSA-scale research team, up until the part about the custom hash function S-box. Apple appears to have known that the feature in question was dangerous and deliberately both hid it, whatever it is, and then went further and protected it with a sort of (fairly weak) digital signing scheme.
As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.
The way it uses a long chain of high-effort zero-days only to launch an invisible Safari instance that then starts from scratch, loading a web page that uses a completely different exploit chain to re-hack the device, is also indicative of a massive organization with truly abysmal levels of internal siloing.
Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.
Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and can also detect the cloud-hosted iPhone services that are often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade, and it actually runs ML on-device to do object recognition and OCR on photos, so as to avoid uploading image bytes: it uploads only the ML-generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.
I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years, and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature had been openly documented, it'd have been found much, much sooner.
Or a joint project between several organizations.
As an example, take the wolf, goat, and cabbage problem, but change the names of the animals and the situation so that the axioms are intact, but the situation no longer seems similar to the classic problem, and therefore has no representation in the training data. GPT-4 can no longer solve this problem consistently.
If Q* can now solve problems like this reliably, it could represent a breakthrough in LLMs’ ability to model the world and extrapolate beyond the training data.
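To make the point concrete, here's a minimal sketch (mine, not from the original comment) that solves the renamed puzzle by breadth-first search over its state space. The names Aork, Bork, and Cork and the relations "Aork eats Bork, Bork eats Cork" are inferred from the transcript below; the point is that the state space is identical to the classic problem, and only the surface wording has changed:

```python
from collections import deque

# Renamed wolf/goat/cabbage puzzle; relations inferred from the transcript:
# Aork eats Bork, and Bork eats Cork, whenever the farmer is absent.
EATS = {("Aork", "Bork"), ("Bork", "Cork")}
ITEMS = frozenset(["Aork", "Bork", "Cork"])

def safe(bank):
    # A bank is safe if no predator/prey pair is left there unattended.
    return not any((a, b) in EATS for a in bank for b in bank)

def solve():
    # State: (items still on the start bank, farmer's side: 0=start, 1=far).
    start = (ITEMS, 0)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (left, side), path = queue.popleft()
        if not left and side == 1:
            return path  # everything ferried across
        here = left if side == 0 else ITEMS - left
        for cargo in list(here) + [None]:  # take one item, or cross alone
            if cargo is None:
                new_left = left
            elif side == 0:
                new_left = left - {cargo}
            else:
                new_left = left | {cargo}
            # The bank the farmer just left behind must be safe.
            unattended = new_left if side == 0 else ITEMS - new_left
            state = (new_left, 1 - side)
            if safe(unattended) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo]))

print(solve())
# e.g. ['Bork', None, 'Aork', 'Bork', 'Cork', None, 'Bork']
```

The search finds the familiar seven-crossing solution, which matches the answer ChatGPT gives in the transcript below.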
ChatGPT: To solve this riddle, you must ensure that no animal gets eaten while you are transferring them across the river. Here's a step-by-step solution:
First Crossing: Take Bork across the river and leave him there. Return alone to the original side. At this point, Aork and Cork are on one side, and Bork is on the other. Bork cannot eat Cork because you are with Cork.
Second Crossing: Take Aork across the river. Now, Aork and Bork are on the far side, and Cork is on the original side. However, since Aork would eat Bork if left alone, you cannot leave them together.
Third Crossing: Take Bork back with you to the original side. Leave Bork there and take Cork with you across the river. Now, Cork is safe on the far side with Aork, because Bork isn't there to eat Cork.
Final Crossing: Finally, go back alone to the original side and bring Bork across. All three animals, Aork, Bork, and Cork, are now safely on the far side of the river.
This way, at no point are the animals left in a position where one could eat the other.
I think it does fine?