My concern with this system is that their proxy is (AFAIK) compatible with Google's format, which is less privacy-respecting by default: it does the location calculation server-side and doesn't allow the client to cache.
I'd much prefer if they called out to Apple's servers directly (or through a direct proxy) & cached the AP data locally, so that over time it would work offline.
The data is cached for roughly 15 minutes. https://grapheneos.org/features#:~:text=It%20caches%20the%20...
GrapheneOS plans to scrape Apple's database and make it downloadable, so that Wi-Fi positioning could be done fully locally.
I'm trying to understand why rooting Android is such a sin.
If I give root to my terminal so I can browse and edit any files I want, I'm placing a lot of trust in the terminal, sure. But trusting the terminal seems reasonable, as it's a basic, fundamental part of any "real" OS. If I don't trust the terminal not to be malicious, why should I trust my OS? Anything could be compromised by a supply-chain attack. If we don't trust anything, we can turn off the computer and have perfect security; but if we accept that there's a trade-off between security and usability, we have to place some trust in some parts of the system.
> It does not matter if you have to whitelist apps that have root — an attacker can fake user input by, for example, clickjacking, or they can exploit vulnerabilities in apps that you have granted root to. Rooting turns huge portions of the operating system into root attack surface; vulnerabilities in the UI layer — such as in the display server, among other things — can now be abused to gain complete root access.
So if some app can somehow exploit the display server, it can inject commands into the terminal and hide the real output? I know the X server on Linux has (or has had) major security issues [1] and doesn't provide any real GUI isolation. Are those the kinds of issues Madaidan is talking about?
I don't know much about Android's display server, but if it's possible for an app without root access to exploit it, couldn't that app inject touch events or keystrokes into another app, or read the other app's screen? How would not having root benefit me if a random app can view or control other apps without my knowledge by exploiting the display server? [2]
From what I gather, if an app with root access has vulnerabilities, it becomes easier for another app (or other malicious code) to exploit it to gain root. But if the UI layer, to use Madaidan's example, has a vulnerability, it seems like it could be exploited successfully, with awful consequences, even if the malicious code doesn't get root in the end. So if I choose to give several apps root access, I would just extend the attack surface from {all of the OS and its various layers} to {all of the OS and its various layers, plus those several apps}.
> root fundamentally breaks verified boot and other security features by placing excessive trust in persistent state.
I don't understand this. Could someone explain it with more details to me, please?
[1] https://theinvisiblethings.blogspot.com/2011/04/linux-securi...
[2] https://xkcd.com/1200/
Android has three major security mechanisms:

- Discretionary Access Control (DAC), i.e. the standard Unix file permissions
- Mandatory Access Control, implemented in the form of the SELinux and YAMA LSMs (GrapheneOS stopped using YAMA in the 2024031400 release and replaced it with advanced SELinux policies)
- Android permissions, which must be declared in AndroidManifest.xml and, most of the time, granted by the user at runtime
Root simply bypasses ALL of these security mechanisms. This is a clear violation of the principle of least privilege: most of what you do with root probably doesn't require access to your entire filesystem and could easily run within a confined SELinux domain. But writing and deploying a modified SELinux policy takes extra time and effort, and devs are lazy, so they just use root to bypass it completely.
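To make the DAC point concrete, here's a small illustrative sketch (plain POSIX semantics, not Android-specific) of what a mode-000 file means to an unprivileged process, and why a root process is exempt from the check entirely:

```python
import os
import stat
import tempfile

# Create a file and strip ALL DAC permission bits (mode 000).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o000)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o0

# For an unprivileged process, open() on this file now fails with
# EACCES -- that's DAC doing its job. A root process holds
# CAP_DAC_OVERRIDE, so the same open() simply succeeds: the kernel
# skips the permission check entirely. SELinux (MAC) and Android's
# permission model are similarly moot once code runs in an
# unconfined root domain.
if os.geteuid() != 0:
    try:
        open(path)
    except PermissionError:
        print("denied by DAC")

os.unlink(path)
```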
As madaidan points out, only a tiny subset of system processes on Android run as root [3]. And Android has clear guidelines about what root processes are and aren't allowed to do. From the AOSP documentation:
> Where possible, root code should be isolated from untrusted data and accessed via IPC.
> Root processes must not listen on a network socket.
> Root processes must not provide a general-purpose runtime for apps (for example, a Java VM).
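As a toy sketch of the first guideline (isolating privileged code behind a narrow IPC interface): the privileged side lives in its own process and only services a whitelist of validated requests, instead of handing untrusted code a root shell. Everything here (the request names, the handlers) is hypothetical, just to show the shape of the pattern:

```python
import multiprocessing

# Hypothetical whitelist of operations the privileged process will
# perform on behalf of callers. Anything else is rejected.
ALLOWED = {
    "get_serial": lambda: "ABC123",
    "get_uptime": lambda: 42,
}

def privileged_worker(conn):
    # Imagine this process runs with elevated privileges; it never
    # executes caller-supplied code, only named, validated requests.
    while True:
        request = conn.recv()
        if request == "quit":
            break
        handler = ALLOWED.get(request)
        conn.send(handler() if handler else "denied")

def call(conn, request):
    conn.send(request)
    return conn.recv()

if __name__ == "__main__":
    parent, child = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=privileged_worker, args=(child,))
    proc.start()
    print(call(parent, "get_serial"))  # a whitelisted request succeeds
    print(call(parent, "rm -rf /"))    # anything else is denied
    parent.send("quit")
    proc.join()
```

The point is that the attack surface of the privileged code shrinks to the small set of requests it accepts, rather than everything a root shell could do.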
Desktop systems are very different from Android and iOS. Of Android's three major security mechanisms, they typically implement only one (discretionary access control). This is why ransomware is so insanely successful: every program has access to all the files and folders of the logged-in user, including network shares, etc. Even on systems that implement application sandboxing and a permission system, such as macOS, it's an afterthought and isn't enforced properly. (macOS is still miles ahead of Windows and Linux, though.) For example, when installing a third-party terminal emulator such as iTerm2 on macOS, you have to grant it permission to access your entire file system (otherwise you'll be limited to the home directory, IIRC). But that permission then applies recursively to every process started within the terminal, which greatly undermines the permission system's usefulness.
> I don't understand this. Could someone explain it with more details to me, please?
Android uses Verified Boot to protect against both evil maid attacks [4], i.e. someone modifying the operating system on the storage device, and malware persistence. By default, the Android /system partition is mounted read-only, unlike, for example, your C:\Windows directory or system directories like /bin on Linux. This prevents malware from modifying the operating system. If you ever get malware on Android or iOS, in most cases you can get rid of it by simply rebooting your device. Unless, of course, the malware has some persistence mechanism. Root obviously provides a great vector for persistence, since the system partition could simply be remounted read-write and the system modified however the attacker wants.
When you build your own copy of AOSP or GrapheneOS, include your modifications, and sign the image with your own Verified Boot keys, that image can't be modified or tampered with by an attacker either. Doing so is perfectly secure (provided, of course, that you trust the extra code you're including).
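A drastically simplified illustration of the idea (a toy stand-in for Android Verified Boot, not the real protocol): the boot chain only accepts a system image whose hash verifies against the signing key, so any persistent modification is caught at the next boot. Real AVB uses asymmetric signatures (the device holds only the public key) and a Merkle hash tree so blocks are verified on demand; the HMAC here is just a compact substitute:

```python
import hashlib
import hmac

# Hypothetical key material standing in for the OEM's (or your own)
# Verified Boot signing key.
SIGNING_KEY = b"device-owner-key"

def sign_image(image: bytes) -> bytes:
    """'Sign' the hash of a system image (toy HMAC stand-in)."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def boot_check(image: bytes, signature: bytes) -> bool:
    """Bootloader side: recompute the hash and verify the signature."""
    digest = hashlib.sha256(image).digest()
    expected = hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

system_image = b"init, system services, framework..."
sig = sign_image(system_image)

print(boot_check(system_image, sig))                    # True: device boots
print(boot_check(system_image + b"malware hook", sig))  # False: refuses to boot
```

This is why root-based persistence and verified boot are fundamentally at odds: any write to the system partition changes its hash, and the signature check fails on the next boot.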
[1] https://source.android.com/docs/security/app-sandbox#protect...
[2] https://arxiv.org/pdf/1904.05572
[3] https://source.android.com/docs/security/overview/implement#...
[4] https://en.wikipedia.org/wiki/Evil_maid_attack