What I could see: making the Play Store and Play Services uninstallable like any other app.
I think you're mixing up OOTFs and EOTFs here. sRGB or HLG can refer to either the stored gamma or, more often, the "reversed" EOTF gamma that is used to display an image. When we say "log", we almost always mean a camera gamma - an OOTF. So the reason it's "in the opposite direction" is that it's designed to use bits efficiently when storing image data, whereas the EOTF is designed to reverse that storage gamma for display purposes.
As you can see from the graph in [1], Sony's S-Log does indeed allocate more bits to dark areas than bright areas. (Though the shape of the curve becomes more complicated if you take into account the non-linear behavior of light in human vision.)
Wouldn't this be the OETF? OOTF would include the EOTF, which is typically applied on the display side (as you noted).
I'm still working with Android 11 and compile times are driving me insane. The ritual to compile, pack and flash super.img into the device is absurd.
Do you know if there is any improvement on that side?
I typically only do a full flash for the first build after a sync. Afterwards I just build the pieces I'm changing and use `adb sync` to push them to the device, skipping both the step that packs the image files and the flash. The `sync` target will build just the pieces needed for an `adb sync` if you don't know exactly what you need to build; I typically use it so I don't have to even think about which pieces I'm changing when rebuilding.
So typical flow goes something like:
```
# Rebase is only needed if I have existing local changes
> repo sync -j12 && repo rebase

# I don't actually use flashall, we have a tool internally that also handles bootloader upgrades, etc.
> m -j && fastboot flashall

# Hack hack hack... then:
> m -j sync && syncrestart
```
Where `syncrestart` is an alias for:
```
syncrestart () {
    adb remount && adb shell stop && sleep 3 && adb sync && adb shell start
}
```
Incremental compiles while working on native code are ~10 seconds with this method. Working on framework Java code can be a couple minutes still because of the need to run metalava.
Strangely, I'm struggling to write this comment in a way that doesn't sound trolling...sorry, I don't mean trolling at all. If you could see my facial expression it would be easier...
I work on displays within an OS team. Having some basic understanding of colour theory is critical for a significant number of modern display projects, particularly at the high end. For example, enabling colour-accurate rendering (games, photos, etc), shipping wide-gamut displays (how do you render existing content on a WCG display?), etc. More specifically to the planckian locus, it generally comes up when deciding which white point to calibrate a given display to at the factory (e.g. iPhone is 6470K, S20 is 7020K in Vivid)[1][2] and if you're doing any sort of chromatic white point adaptation, like Apple's True Tone[3][4].
My background before joining the team was a degree in math, but I really enjoyed doing low level projects in my spare time, so ended up on an OS team. We also have colour scientists who study this full time and have a _significantly_ better understanding of it all than I do :)
[1]: https://www.displaymate.com/iPhone_13Pro_ShootOut_1M.htm#Whi...

[2]: https://www.displaymate.com/Galaxy_S20_ShootOut_1U.htm#White...

[3]: https://support.apple.com/en-gb/HT208909

[4]: http://yuhaozhu.com/blog/chromatic-adaptation.html
Is there not a better way to do this? There must be. Why not copy 10 frames at a time? Is the device decoder buffer that small? Is there no native Android support for pointing a device decode buffer to a data ingress source and having the OS fill it when it empties, without having to "poll" every 15ms? So many questions.
A couple of reasons this isn't as silly as it seems:
1) ~All buffers in Android are pipelined, usually with a queue depth of 2 or 3 depending on overall application performance. This means that missing a deadline is recoverable as long as it doesn't happen multiple times in a row. I'd also note that since Netflix probably only cares about synchronization and not latency during video playback, they could have a buffer depth of nearly anything they wanted, but I don't think that's a knob Android exposes to applications.
2) The deadline is probably not the end of the current frame but rather the end of the next frame (i.e. ~18ms away) or further. The application can specify this with the presentation time EGL extension[1] that's required to be present on all Android devices.
[1]: https://www.khronos.org/registry/EGL/extensions/ANDROID/EGL_...
I thought this was a myth. I looked into it last year and found that power consumption was (roughly) proportional to brightness.
At a meta-level, I'm surprised that something with such a factual (and testable) answer can still be unsettled.
It is absolutely settled, and has been tested over and over again. Power is roughly proportional to the amount of light emitted[1], so having dark grey is absolutely a power savings over pure white.
Google slides: https://www.theverge.com/2018/11/8/18076502/google-dark-mode...

Display energy modeling: https://onlinelibrary.wiley.com/doi/am-pdf/10.1002/stvr.1635
[1]: This isn't totally true mostly because the display is broken into RGB elements emitting light of differing efficiencies and human perception of the brightness of those elements is not identical.
Edit: seems like he wrote about this before:
> Much of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google