I was wondering why Windows isn't on more ARM devices, since Windows on ARM exists and Windows has by now been manually ported to so many ARM devices (the Raspberry Pi, for example). Turns out Qualcomm has an exclusivity deal for Windows on ARM licenses (which might soon expire).
Be careful. Down this path lies "the ISO is asking us to formally specify the mechanics of DOOM" and then the only resulting end game of that is "we got DOOM running on the ISO committee!"
Naturally I agree with you though. (Because we need DOOM running on more platforms~)
And I'd suggest Bad Apple also needs similar treatment.
> The ritual to compile, pack and flash super.img into the device is absurd.
I typically only do a full flash for the first build after a sync. Afterwards I just build the pieces I'm changing and use `adb sync` to push them to the device, skipping both the step that packs the image files and the flash. The `sync` target will build just the pieces needed for an `adb sync` if you don't know exactly what you need to build; I typically use it so I don't have to even think about which pieces I'm changing when rebuilding.
So typical flow goes something like:
```
// Rebase is only needed if I have existing local changes
> repo sync -j12 && repo rebase

// I don't actually use flashall, we have a tool internally that also handles bootloader upgrades, etc.
> m -j && fastboot flashall

// Hack hack hack... then:
> m -j sync && syncrestart
```
Where `syncrestart` is an alias for:
```
syncrestart () { adb remount && adb shell stop && sleep 3 && adb sync && adb shell start }
```
Incremental compiles while working on native code are ~10 seconds with this method. Working on framework Java code can be a couple minutes still because of the need to run metalava.
New versions of Android aren't open-source until their stable release, so I don't know. I've been running these VMs on the stock ROM.
I don't feel like incremental AOSP builds are that slow, and I don't think it's changed much from Android 11 to 12. It's highly dependent on good hardware though, and it probably also helps that I flash individual partition images instead of building dist or target-files.
Now that Windows 11 has 32- and 64-bit x86 emulation this has the potential to do some interesting things in the long tail of the market.
I honestly wonder if there's a monetary opportunity here?
(This used to read a little differently (https://i.imgur.com/yp95XxR.png - thought it would be funny) but it quickly became apparent some editing was in order. This comment will likely remain stuck at the bottom of this subthread... woops.)
One of the primary use cases for VMs in Android seems to be a desire to replace the Trusted Execution Environments (TEEs) enabled by things like TrustZone. There have been several presentations about this by Google:
- LPC 2020: https://www.youtube.com/watch?v=54q6RzS9BpQ&t=10862s and https://lpc.events/event/7/contributions/780/
- KVM Forum 2020: https://mirrors.edge.kernel.org/pub/linux/kernel/people/will...
From the KVM Forum presentation: "We need a way to de-privilege third party code and provide a portable environment in which to isolate services from each other and also from the rest of Android."
From the LPC presentation:
"What do we need? We need a hypervisor that is:
1. open source
2. easy to ship and update
3. supports guest memory protection
4. trustworthy
KVM as part of GKI is a very good fit with the right extensions."
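For anyone curious whether their own device exposes KVM at all, here's a rough check over adb. Purely illustrative: the presence of /dev/kvm alone doesn't prove pKVM or protected-VM support, and the node may not be visible without root.
```
# Rough check: does the host kernel expose KVM to userspace at all?
adb shell ls -l /dev/kvm

# List any hypervisor-related properties the build happens to advertise
adb shell getprop | grep -i hypervisor
```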
I think the development of mobile OSes is heading in a completely wrong direction. The "curation" effect of proprietary stores is only good for censorship, and mobile OSes have the most aggressive applications, bordering on malware. At least you don't have the browser toolbars that classical systems were infected with?
For productivity I don't want iron curtains between my applications or between application and system. Hell, it would actually be nice to be able to interface with the hardware directly, without all the loops.
We have a deeply flawed security model for mobile OSes that relies on dogma that won't lead to additional safety. A major safety issue is data exfiltration, and by that standard mobile devices are vastly more dangerous than the average workstation. That comes with the nature of the device to a significant degree, but I think mobile OSes have ridden off a cliff somewhere.
And now they want to push apps into a VM? Why not just buy a device per app at this point...
Anybody making money off of phone apps should not be trusted; they can and do take every piece of data they can to figure out how to monetize it. I want and need phone app security to protect me from the companies that want me as a product to be sold.
This has nothing to do with apps. It's instead about things like the biometrics or DRM handling code. The stuff that regularly runs on its own dedicated CPU or in specialized sandboxes like ARM's TrustZone. This is about moving that to instead be in a VM on the "primary" CPU, implemented in open source Linux kernel code instead of who-knows-what ARM microcode.
Or to compare against an ecosystem that has more love around here, imagine if everything that Apple runs on the T2 was instead run in a VM with an equivalent level of security. That's the goal.
Walled gardens have many problems, but one of the reasons they exist is that they offer some security benefits. Virtualization allows for stronger containment of apps and makes walled gardens less necessary.
What does this actually mean? Trustworthy under which circumstances, in what security model?
I'm terrified it means "unbreakable DRM in a way that is harder still for consumers to use or bypass" rather than "the code is not vulnerable to side-channel malware attacks". A lot of this could either be brilliant -- applications can't look at each other at all, unless one of a few trusted utilities such as screen-readers or keyboards, for example -- or utterly horrific from a user point of view. "Attestation" is mentioned in one of the end goals of that KVM Forum pdf, but it's unclear whether or not they mean a Qubes-like OS with the user as the hypervisor, or the user completely and utterly locked out of being the hypervisor, ever. One of these I think could be quite interesting -- especially without hardware attestation. One I think would be awful.
Part of the reason I (thoroughly break) Android's security model and run a rooted image is to have the knowledge that I have control over everything in my device. It's important to me, and I worry that these efforts will ultimately take that away from me.
Modern phones already have DRM running in TrustZone, even if you flash a custom Android build. pKVM is actually better in that regard as the DRM will be moved to a protected VM under Android's control, so you could choose to disable it on a custom OS.
Many of the comments are from the point of view of treating the phone as a desktop to use directly, which is not a very nice experience.
I want to treat the "old" phone as a server: 8 cores, 6 GB RAM, a UPS, plenty of (expandable) storage, low power consumption, small form factor, redundant network (4/5G + WiFi), and a built-in screen and "keyboard" for the times when you need to do debugging.
Do others have experience with this kind of setup?
What services are you running?
I did this a while back with a Pixel 2. I decided it wasn't worth the trouble.
Phones are not designed for continuous power draw (and the consequent heat dissipation) - constant-use power dissipation limits are very low. The performance dips dramatically, and the constant high temperature kills the onboard flash prematurely. The same applies to the radios - WiFi and cellular. Sustained data transfer on either of those interfaces causes issues - dropouts/disconnects/thermal reboots.
This is in addition to the fact that phones just don't have great I/O.
The heat isn't too hard to deal with, there are solutions (including the weird ones like water cooling cases https://www.ebay.com/itm/175003613243 ). Simply adding airflow tends to get you pretty far by itself, even. Device test labs are a common-enough thing and handle running phones like servers at scale without too much difficulty.
But there are more nasty problems lurking in this area, like the fact that phones aren't designed to be continuously powered. They will naively try to keep the battery at or near 100% charge, which will destroy it relatively quickly, and not uncommonly in the "it's bulging and increasingly likely to burst into flames" variety. Two years is considered a "decent run" for things like device test labs as a result (see e.g. the FAQ on https://github.com/DeviceFarmer/stf )
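If you do want to leave a phone plugged in 24/7, some kernels let you cap the charge level so the battery isn't pinned at 100%. A minimal sketch, assuming a rooted device whose kernel exposes the upstream power_supply charge-control node; the exact path and node name vary a lot by vendor, so treat this as illustrative only:
```
# Cap charging at ~60% (requires root; node name/path differs per device/kernel)
adb shell su -c 'echo 60 > /sys/class/power_supply/battery/charge_control_end_threshold'

# Read back what the driver currently reports
adb shell su -c 'cat /sys/class/power_supply/battery/charge_control_end_threshold'
```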
Maybe a solution to the problem would be a case that would provide cooling and would hide cables and adapters. The result could still be somewhat smaller than a NUC. Then you would only replace the phone once in a while and would have to cut a new screen cover. For better cooling depending on the phone you could remove the back cover.
The Pixel 2 era Snapdragon 820s-830s especially seemed fairly throttle-heavy. There are better APIs for sustained power modes now, although I'm not sure a VM would use them.
> Phones are not designed for continuous power draw (and the consequent heat dissipation) - constant-use power dissipation limits are very low.
Maybe someone here can help me understand how power draw works on Android.
I have a ZTE Z959, a Cricket device. I use the phone to take photos with Open Camera every few seconds and stitch them together to make a video with ffmpeg. (Someone told me that YouTube has no practical limit on storage and I wanted to prove that they will cap me at some point but that is a topic for another day. Basically, the tl;dr here is for my casual use, YouTube has unlimited storage).
But I digress. The point is that at some point the phone's battery started swelling up, which became a fire hazard. I wanted to power the phone without the battery. I have a ThinkPad 65W USB Type-C charger. The first challenge was easy to work around: the phone just goes into a boot loop, but if I add the battery and plug in the charger, the phone boots up OK. After the phone boots, I can remove the battery and the phone stays on (provided I don't do things like use the flash; my guess is the flash needs more power than my charger can provide).
Can someone shed more light on this process? How does all of this work? Does all of this mean my phone is technically running from the battery even when it is connected to the wall?
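Not an answer to the hardware side, but you can at least see what power source the OS thinks it's running from using the standard battery service dump (the exact fields vary a bit between devices):
```
# Ask the battery service what it currently reports
adb shell dumpsys battery
# Output typically includes "AC powered", "USB powered", "level" and "status"
```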
I set up an old phone to run a torrent client (qbittorrent). It was a Debian 8 chroot [1] that I could ssh into.
In the end, it really wasn't worth the hassle. Networking ports would go unresponsive when the mobile CPU on the phone would go into "deep sleep". The battery began expanding after a few months.
The CPU was more than capable of doing the actual processing work. I/O with the SD card was passable.
I really wish someone would invent some sort of generic battery adapter that could transform any device requiring a battery into something that can run on direct power. I really adore that old Sony Ericsson Xperia Android device.
I had a small Ruby (Sinatra) website running on a Linux vm on a spare Android phone (Sony Xperia X Compact). Turns out the CPU is quite capable. It actually compiles Nginx faster than a low end Google Cloud vm.
It sounds like nothing, but watching Nginx logs scroll by on a tiny phone screen is kind of unbelievable.
PostmarketOS gets you pretty close to this: boot Alpine Linux, with more or less everything working. Even if the screen and/or GPU acceleration don't work on that phone, it would be plenty to replace a Raspberry Pi.
I've been thinking about this quite seriously, and for now I see a few pain points:
- Some phones (looking at my daily driver, a Galaxy S4) do seem to allow disconnecting the battery from the charging circuit, which could lead to issues in the long run
- Ideally, you'd want to connect a USB hub over USB OTG (wired Ethernet, USB storage, real keyboard). I have yet to try charging the phone at the same time, although I think some cables enable this.
This is basically my vision of how to bring self-hosting to the masses.
Upcycle an old Android phone. Install apps for Nextcloud, Jellyfin, etc. Do a quick OAuth2 flow with your domain name provider to tunnel a subdomain directly to the app, and you have an end-to-end encrypted private cloud.
For this to work we need:
* Simpler domain name providers targeted at consumers instead of IT professionals. You shouldn't need to understand DNS records to use a domain.
* An open protocol for setting up tunnels[0].
* Nextcloud et al need to implement the protocol on their end. For open source projects 3rd parties can make wrappers.
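Until an open tunneling protocol like that exists, a plain SSH reverse tunnel to any box with a public IP is the usual stand-in. A sketch only: the relay host name and ports are placeholders, and the relay's sshd needs GatewayPorts enabled:
```
# Expose a service on the phone (e.g. Nextcloud on :8443) via a public relay box.
# "relay.example.com" is a placeholder; the relay must have GatewayPorts enabled.
ssh -N -R 0.0.0.0:8443:localhost:8443 tunnel@relay.example.com
```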
I ran an Odroid (https://www.hardkernel.com/) as my home server some years back. It worked nicely for what it was, but in the end the lack of good storage options (I ended up attaching USB hard drives) was what doomed it. For what it's worth, a RasPi is exactly what you're describing, minus the screen/keyboard (but with much better cooling, which would otherwise be the bottleneck).
For that you shouldn't need virtualization, though. With Termux you can run a lot of programs. There are Termux packages for some server software, and you should be able to compile more yourself.
I remember being able to run a rootless Debian chroot (using a utility called proot) on an Android 9 device, but things might have changed in the meantime and I don't know if it's still straightforward.
If you're looking for serious performance, though, it may not be practical.
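It still seems reasonably straightforward via Termux's proot-distro wrapper, assuming that package is still around and works the same way:
```
# Inside Termux: rootless Debian via proot (no root, no custom kernel)
pkg install proot-distro
proot-distro install debian
proot-distro login debian   # drops you into a Debian shell
```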
It would be best to create a kubernetes cluster of phones (reliability, scalability,...).
To run containers on Android you currently need to build a custom kernel (1); I hope this feature removes that need.
In terms of performance, my expectation is that it will be comparable to the VMs/VPSes from cloud providers, where I/O is also limited; with plenty of RAM, that won't be a big issue for many workloads.
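A quick way to see how far a stock kernel is from container-ready, assuming it was built with /proc/config.gz support (many vendor kernels aren't):
```
# Look for the kernel options containers need; requires CONFIG_IKCONFIG_PROC
adb shell 'zcat /proc/config.gz | grep -E "CONFIG_(NAMESPACES|PID_NS|NET_NS|CGROUPS|OVERLAY_FS)="'
```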
And a bunch of sensors. Things like GPS, an accelerometer, a microphone, and a camera are pretty much standard even on inexpensive smartphones going back half a decade.
I'm looking forward to the future we have envisioned for a long time, where you carry your main computer around with you and simply drop it into a standardized i/o dock when you sit at a desk and it runs the big high res displays and accepts keyboard/mouse input, and you use it like a phone the rest of the time. (Basically, a phone that is also a laptop replacement - I know there are dockable phones now that can run a monitor and kb/mouse, but they're not really "laptop replacement" level.)
I could even imagine such a system having two different CPUs (or, more likely, different cpu performance cores in a single package) that power up/down based on wired power availability, basically just automatically getting many times faster when connected to power and not having to conserve juice on battery. Storage and memory are already fairly low power these days, and tiny. Mobile (i.e. handheld) GPUs are now powerful/efficient enough to run high-res handheld displays with all day battery life, which while probably not 4k gaming level, are more than enough to run a multimonitor desktop setup when not gaming, especially if you make the quite safe assumption that they'll have wall power to crank up the GPU whenever asked to run external displays.
I'm really excited about mobile computing over the next ten years. The Nintendo Switch and M1 iPad Pro are little glimpses into this future. I look forward to replacing the dozen computers in my lab with a single handheld device that can simultaneously virtualize many of them and conveniently multiplex several big displays between them, and come with me in my pocket when I leave.
My experience with a Surface Book (which I use in at least 3 different ways - as a tablet, as a laptop, and connected to a monitor/keyboard/etc. on my desk via Surface Dock) makes me think that's coming eventually, but it'll take longer than you think:
- External GPUs are still pretty bad
- Tablets with cellular connectivity are if anything less well supported than before. I think this is mainly because carriers aren't really supporting the idea of one person/account having multiple "phones". Smartwatches have the same problem - I remember going to a Samsung store and they were showing off a smartwatch that had its own SIM card and cellular modem, but the staff couldn't tell me what kind of phone contract you had to have to make it work
- Small devices still mean a noticeable compromise in power. I've tried using Samsung Dex as well and it's... ok, but appreciably worse than even a netbook, even if on paper the processor/memory/etc. ought to be catching up. Laptop as primary computer only really took off once you genuinely couldn't notice any performance disadvantage compared to a desktop, at least in my friend group; I think it'll be the same with tablet and (eventually) phone/watch/ring form factors
> Smartwatches have the same problem - I remember going to a Samsung store and they were showing off a smartwatch that had its own SIM card and cellular modem, but the staff couldn't tell me what kind of phone contract you had to have to make it work
I might be missing something about cultural differences but... why would you expect an electronics store to tell you what kind of plans your telco has?
Here in Switzerland pretty much every telco offers an additional SIM for tablets and watches, but you need to talk to the mobile operator not the electronics store ^^
> Tablets with cellular connectivity are if anything less well supported than before.
Not my experience. I'm sure it depends a bit on your carrier, but on T-Mobile it was easy to order a data sim for a tablet from their website. Setting up a smart watch was even easier with eSIM, it just sets up everything for you when you pair with your phone.
What's the problem with eGPUs? The most problematic thing I see is that they only work with Intel CPUs for now and that all enclosures are Thunderbolt 3.
Also Samsung is really good at adding features and then locking them down in weird ways, breaking basic features or just providing horrible user experience.
I’ve had this for a year and a half with a PinePhone. I just don’t use the docking ability that much because:
1) Forwarding X11 to a desktop is easier (and all the apps run on X anyway)
2) Since it’s just Linux all the apps can run on a desktop just like the phone, all my stuff is synced with git or rsync, and if there’s something I need outside the usual folders it’s just an scp away. There’s absolutely no reason to use one particular machine over the other other than form factor and computing resources.
It just makes a lot of sense to have a decent desktop permanently plugged in that you can use without any hassle. Devices like that will kill laptops though, I know it did for me. Of course I don’t think this will happen with Android. Google is way too greedy and they’ll find a way to make it unusable.
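For reference, the X11 forwarding in 1) is just stock SSH; a sketch where the host name, user, and app are placeholders:
```
# On the desktop: run an app that lives on the phone, displayed locally via SSH X11 forwarding
ssh -X user@pinephone.local xterm
```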
My laptop has more than 20x the memory of the PinePhone's maximum, and can run 3x external 4K displays, so the PinePhone would not even remotely approximate laptop-replacement level for me.
The hardware in mobile phones needs to improve substantially for the scenario I described to be practical; there's no hardware that supports it today, but the software in TFA is a step in that direction.
(As far as I'm aware, they still do this on Pixel 6.)
I already use my laptop as a desktop when I'm at home by connecting it to a USB-C hub, which in turn connects it to my monitor, keyboard, mouse, etc. I think the smartphone as a "single device which can be used for everything" is a cool concept and definitely possible considering how powerful modern smartphones are. The limitation is software.
This idea is cool, but I already have a seamless experience of moving to different devices through my day and having access to everything I need.
It is stuff like O365, GitHub, OneDrive, AWS etc. that enables that. No plugging in, no reconfiguring devices. Moving between Windows 11, Android and Debian, everything I want is right there.
I can't see the advantage, yet, of trying to consolidate down to one device.
Windows Phone used to have this and it was called Continuum, although it couldn't actually run proper desktop apps, only a few Office apps made for it. The promo page for it is still up despite Windows Phone being dead for a few years now.
Except that you will not be able to own a "main computer". It will be rented. And it will also be nothing more than a slightly less stupid terminal connecting into a cloud that offers to rent you the software you want to use, because you can't actually install your own, thanks to everything being locked down behind VMs preventing the execution of unsigned code.
Want to execute your own code? Fine! Buy a license!
In a perfect world, it would work great. A world where hotels and workplaces have available phone docking stations that allow the travelling business people to plug their phones in, and their phones boot up an OS where we could have access to the full power of our personal computers on the provided monitors/keyboard/mouse.
That world doesn't exist. And it's not that much more expensive for the workplace to add the computer, and it's much more convenient to just carry our own ultraportable laptops so that we can work at coffee shops and on planes.
But perhaps there is a use case out there for parents with multiple kids who don't want to use low-powered or hand-me-down computers...
Probably yes, but we haven't had a decent product like that. Now if it can decently run a linux distro (with external display, etc) I might ditch my iPhone....
Even cheap commercial phones are far more powerful, though. I have a PinePhone because I want a Linux-native phone, but it's still not really usable as a phone; it works OK as a very low-powered Linux system though.
Apple needs to open up its newish virtualization framework on iPadOS. The M1 iPad Pros with Magic Keyboard are great, but basically useless for development work. I want to use mine as a separate, stripped-down environment to learn new technologies in. The only way I have found is to use it as an SSH terminal.
> The M1 iPad Pros with Magic Keyboard are great, but basically useless for development work.
I think that's the point.
There's a nontrivial % of Mac Mini / MacBook / iMac sales that exist entirely because of the need to have a Mac to publish anything, even PhoneGap/Cordova projects and Safari Extensions, to the Apple App Store.
I already have a MBP for that. I want something with no/less windows and less distractions so that I can learn in a stripped down environment. Also, an iPad Pro+Magic Keyboard costs as much or more than many/most MacBook Airs.
I don't think apple is too afraid of cannibalizing the Mac with iPad.
The iPad Pros are on par price-wise with the Mac after you get keyboard cases etc., and on iOS they also get a huge cut of every app sold, since you have to get them from the App Store. They'd probably be thrilled if Macs were replaced with iPads. They can always jack up the price later.
They wouldn't have spent years neglecting the Mac and letting it decline if they cared so much about that revenue.
I'm not sure why you'd want to use it as a development device when there is a not much heavier MacBook Air available with the same brain in it. I actually ran from September 2021 to January 2022 with just my iPad Pro as a computer for all personal tasks, which include programming, and decided that I was artificially limiting what I was doing for the sake of a minimalist ideal. iOS is just not the right tool for the job.
Each Apple device has a very nice overlapping niche and a lot of consistency between them but some devices are intentionally not designed to do some tasks. iOS is fine for non power user tasks and simple automation but nothing more. For 80% of what I do that is fine so I usually go to the iPad first always. But if I want to sit down and do full on keyboard based productivity it's the MBP every time.
The iPad Pro has a very special place in my heart though. It's the most reliable and efficient machine, and with the Apple Pencil it's a game changer. I love to take it with me when I go out for a weekend: I'll sit in a hotel, do spreadsheets, organise tasks, do some drawing, watch some streams, handle casual messaging and emails, and even do video and photo editing. But not programming!
That’s a circular explanation. The iPad isn’t suitable for programming because it isn’t suitable for programming. The limitation is completely arbitrary, because Apple thinks they can tell users what they should use their devices for. It’s “You’re holding it wrong” applied to software.
I don’t disagree with you, I have a new iPad 13 Pro and a three year old iPad 11 Pro that would be great to have better dev tools on. Pythonista and Juno are fairly good for Python development, LispPad is a nice Scheme dev environment, Raskell used to be pretty good for Haskell but no longer runs or is supported, etc.
I have a new M1 MacBook Pro with a large monitor for programming, but since I usually write using an iPad, having first-class development tools for all popular languages would be very nice.
You could run Linux before, with QEMU. It was a crappy experience. Even writing on a touchscreen is bad, and when the window is tiny and you can barely see the letters it's even worse. Of course you can hook up a wireless keyboard and mouse, but then you might as well go to the PC, which has an even bigger screen.
I really don't see what this brings. Is Google so lost that the only "innovation" they can bring to Android is discovering that the Linux kernel has support for virtualization?
> I really don't see what this brings. Is Google so lost that the only "innovation" they can bring to Android is discovering that the Linux kernel has support for virtualization?
When talking about the strategy of a successful multibillion dollar company, the most likely answer is "no".
The very short article explains at a high level:
"they’re used for things like enhancing the security of the kernel (or at least trying to) and running miscellaneous code (such as third-party code for DRM, cryptography, and other closed-source binaries) outside of the Android OS."
Forgive me for regurgitating trendy HN counter arguments, but: this comment seems very similar to the early dismissals of Dropbox.
"I can already achieve this with FTP/samba/whatever." Sometimes taking existing, established technology and making it easier to use is all it takes to "innovate".
Of course I have no idea how killer this particular Android feature will be. I'm just criticizing the "this is not new" argument.
For every Dropbox, there are a thousand companies and products that similar criticisms could be made against that don't actually succeed, sometimes for the reasons stated in those criticisms.
As an aside: this is exactly what tech bros do not get about the AirTags when we try to get the message across about how bad they are for stalking. Every time I try to bring this up, I am told that Tile, GPS trackers, or some Samsung thing I-do-not-even-know-what already exist. Who cares -- the AirTags work, alas they work exceptionally well (precision- and battery-life-wise especially), they are easily accessible, and their existence is easily discoverable. They are devastating in this field. The problem is, vulnerable people will use public transit, where it's practically guaranteed there will be an iPhone within a ten-meter radius, especially in North America where the iPhone is so prevalent. Meanwhile, the victim might not have an iPhone, just some scuffed Android 'cos that's all they can afford. That Apple released (or didn't recall) them is a blow; that Congress didn't ban them is not surprising, because it's not like we expect Congress to be functional or to protect poor people, especially women. But watch a stalking case involving one go bad (= rape, murder) in, say, Germany, and then watch the EU bring down the banhammer.
Dropbox was a different tech from FTP/Samba/whatever.
This one, though, is not the first VM on smartphones at all. Running a desktop OS has been possible for years with comparable solutions. What makes this different from Samsung's DeX, for example?
You act like Dropbox is successful when they are not. A company paying other companies to advertise and push their products and burning through cash to appear successful is not success.
Well, imagine your next work laptop to be a phone. It connects wirelessly to a display, which is paired to a keyboard and mouse.
Still not seeing the point?
People are either gonna love it or hate it. Love because of how little space it requires, or hate because the performance is gonna be worse than even the non-Pro versions of the Surface.
> It connects wirelessly to a display, which is paired to a keyboard and mouse.
When you add the display, battery, keyboard and mouse, it's really cheap to add brains and storage and build a full laptop that can share data with your phone.
This is why most lapdocks fail - they aren't that much less expensive than a cheap laptop. This is also why, at some point, nobody was making dedicated terminals for large computers - they were as expensive to build as PCs.
Why use virtualized Linux on proprietary Android if you can use a native Linux phone with all-free drivers and a desktop OS? https://puri.sm/products/librem-5
That is what I want. In addition to a docking station in my office, I would like iPhone <—> iTV style interop in the living room, and compatible kiosk style support in public areas like airports, airlines, office buildings, etc.
Think bigger.
Big tablets with detachable keyboards that can now run Android and Windows.
Also there are Android-based VR headsets, and their resolutions are getting better. Think of working in a connected virtual office, running Windows applications.
I kinda want my phone to be a portable PC in my pocket. At least I think I do... If I could dock my phone in at work and have it boot up a Windows VM (I work on a Windows PC) that would be neat.
I know there are some options for this, like Samsung Dex, but with this there is at least some potential for having a Windows PC in your pocket. Like Microsoft tried to do with those older Windows phones.
We are probably focusing on the wrong use case here.
Yes, this virtualization allows us to run Windows/Linux. That's probably not the main goal, though. It's more to reuse packages from those stacks on your Android phone, kinda like the VMware Fusion mode on a Mac, to run applications side by side, or to run things in a secure virtualized container.
Why recompile to android, when you can virtualize?
You could just chroot into a normal GNU-based userspace before they screwed it up, between all the namespace stuff and then later restricting exec. I even ran X11 apps on my Android a long time ago with zero virtualization overhead.
> Even writing on a touchscreen is bad, and when the window is tiny and you can barely see the letters it's even worse.
We need to bring back gestural writing, with simplified letter forms. The basic tech was in production use in the mid-1990s, and there are clearly unencumbered alphabets that could be easily used for this, such as the 19th century Moon Script. Recent UI work has made Linux quite usable on touchscreens and smaller devices, but text input is way harder than it could be.
Linux has had KVM, sure, but mobile ARM CPUs have not had the necessary virtualization extensions for “native” speed hypervisors up until very recently.
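And once a device does expose /dev/kvm to userspace, booting a guest looks like ordinary KVM. A minimal sketch, assuming an arm64 qemu build (e.g. from a chroot or Termux) plus a guest kernel and disk image you supply yourself:
```
# Minimal KVM-accelerated arm64 guest; "Image" and "rootfs.img" are your own files
qemu-system-aarch64 \
    -M virt -enable-kvm -cpu host \
    -smp 4 -m 2048 \
    -kernel Image \
    -append "console=ttyAMA0 root=/dev/vda rw" \
    -drive file=rootfs.img,format=raw,if=virtio \
    -nographic
```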
> they’re used for things like enhancing the security of the kernel (or at least trying to) and running miscellaneous code (such as third-party code for DRM, cryptography, and other closed-source binaries) outside of the Android OS
Long shot here: does this make it more possible for production releases to be closer to AOSP and then run the rest on the hypervisor? Also, what does it mean for the future of Project Treble, i.e. better upgrade paths for all devices (outside of manufacturer will, which is still the main issue)?
Booting, logging in, simple usage: https://twitter.com/kdrag0n/status/1493088558676017152
Playing Doom (via x86 emulation): https://twitter.com/kdrag0n/status/1493089098944237568
And Linux:
Booting various distros: https://twitter.com/kdrag0n/status/1492832966640222210
Compiling Linux 5.17-rc3 allnoconfig for arm64, on Arch: https://twitter.com/kdrag0n/status/1492833078410047488
https://www.theverge.com/2021/11/23/22798231/microsoft-qualc...
Because I swear Does It Run Doom? is becoming a requirement checkbox for any new project.
I am also excited to see what booting multiple OSes means for the ecosystem around phh's (treble) builds, too.
I'm still working with Android 11 and compile times are driving me insane. The ritual to compile, pack and flash super.img into the device is absurd.
Do you know if there is any improvement on that side?
Recently, they've also focused on using VMs to isolate parts of the system and apps: https://twitter.com/salt___doll/status/1492872311652765700
Mishaal Rahman has written a detailed post about virtualization on Android: https://blog.esper.io/android-dessert-bites-5-virtualization...
[1] https://wiki.debian.org/ChrootOnAndroid
[0]: https://forum.indiebits.io/t/toward-an-open-tunneling-protoc...
[0] https://github.com/termux/termux-app/issues/2155
(1) https://gist.github.com/FreddieOliveira/efe850df7ff3951cb62d...
Even ignoring USB 3, most servers should work fine when capped to 100 or 200Mbps.
https://www.microsoft.com/en-us/windows/continuum
That's what you're looking forward to.
So it is "the future" in the same way 3D TVs were: Much hype and then kinda neat, but not great.
The hardware was nice though.
1.) Run windows/linux distro on the go.
2.) Can play graphically demanding titles in 720P.
3.) Has a dock which supports ethernet, Mouse/Keebs, HDMI out.
4.) Can have up to 4TB of internal SSD with tweaks.
Naturally it is somehow still a PhD topic, but imagine using Swift Playgrounds with the Pencil as if it were a paper notebook.
First Google is gonna run Fuchsia on Linux, then Linux will be removed entirely.
That's what this is laying the groundwork for.
"I can already achieve this with FTP/samba/whatever." Sometimes taking existing, established technology and making it easier to use is all it takes to "innovate".
Of course I have no idea how killer this particular Android feature will be. I'm just criticizing the "this is not new" argument.
This one, it's not the first VM on smartphones at all. Running a desktop OS is possible for years with comparable solutions. What makes this different from Samsung's DeX, for example?
Still not seeing the point?
People are either gonna love or hate it. Love because of how little space it requires or hate because the performance is gonna be worse then even the non pro versions of the surface.
When you add the display, battery, keyboard and mouse, it's really cheap to add brains and storage and build a full laptop that can share data with your phone.
This is why most lapdocks fail - they aren't that much less expensive than a cheap laptop. This is also why, at some point, nobody was making dedicated terminals for large computers - they were as expensive to build as PCs.
I wonder when we'll be there. For sure we're not there for wireless, yet. It's unreliable, especially if you have a lot of wireless devices around.
That's the real killer feature.
https://blog.esper.io/android-dessert-bites-5-virtualization...
Performance?
https://arstechnica.com/gadgets/2021/11/android-12-the-ars-t...
Private Compute Core—Running AI code in a virtual machine?