They're still disabled by default on FreeBSD - my PR is pending, and the patch has been in testing in ports for a while: https://github.com/php/php-src/pull/12288
https://www.usenix.org/conference/atc22/presentation/khrabro...
https://developer.ibm.com/articles/jitserver-optimize-your-j...
It also has a "dynamic AOT compiler", so first-run stuff can be JITed and cached for future execution instead of it all starting out interpreted every time.
I'm sure they're greatly simplified in comparison - it isn't trying to simulate complex interpersonal relationships or painstakingly track everyone's hair growth, but they still have a decent amount of detail to them given the scale.
The demo is just an older version of the full game (it usually lagged about three major releases behind; not sure where it stands now - I think it's more up to date?) - and far from making me feel like I didn't need to pay for the thing to enjoy it, it instead made it an easy buy.
1,000 iterations isn't remotely generous for JRuby, unfortunately - the JVM's tier 3 compilation only kicks in by default around 2,000 invocations, and full tier 4 is only considered beyond 15,000. I've observed this to have quite a substantial effect, for instance bringing manticore (the JRuby wrapper for Apache's Java HttpClient) down from merely "okay" performance after 10,000 requests to pretty much matching the curb C extension under MRI after 20,000.
You can tweak it to be more aggressive, but I guess this puts more pressure on the compiler threads and their memory use, while reducing the run-time profiling data they use to optimize most effectively. It perhaps also risks more churn from deoptimization. I kind of felt like I'd be better off trying to formalise the warmup process.
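For the record, the knobs in question are HotSpot's tiered-compilation thresholds, which JRuby passes through to the JVM with the -J prefix. A sketch - the values here are purely illustrative, not a recommendation:

```
# Lower the thresholds at which tier 3 (C1 with profiling) and tier 4 (C2)
# compilation kick in, so hot paths get compiled after fewer invocations
# than the ~2,000 / ~15,000 defaults. Values are illustrative only.
jruby \
  -J-XX:Tier3InvocationThreshold=100 \
  -J-XX:Tier3CompileThreshold=500 \
  -J-XX:Tier4InvocationThreshold=1000 \
  -J-XX:Tier4CompileThreshold=3000 \
  script.rb
```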
It's rather a shame that all this warmup work is one-shot. It would be far less obnoxious if it could be preserved across runs - I believe some alternative Java runtimes support something like that, though given JRuby's got its own JIT targeting Java bytecode, I dare say it would require work there as well.
After an enormously unpleasant debugging cycle, we realized that the JIT compiler was incorrectly eliminating a call to System.arraycopy, which meant that some fields were left uninitialized - but only when JIT-compiled; non-optimized code ran fine.
This left us with four possible paths forward:
* Upgrade thrift to a newer version and hope that JIT compilation works well on it. But this is a nightmare since A) thrift is no longer supported, and B) new versions of thrift are not backwards compatible so you have to bump a lot of dependent libraries and update code for a bunch of API changes (in a LARGE number of services in our monorepo...). With no guarantee that the new version would fix the problem.
* File a bug report and wait for a minor version fix to address the issue.
* Skip this LTS release and hope the JIT bug is fixed in the next one.
* Disable JIT compilation for the offending functions and hope the performance hit is negligible.
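For what it's worth, that last option doesn't need code changes - HotSpot can be told to skip JIT compilation of individual methods with -XX:CompileCommand. A sketch, with made-up class and method names standing in for the affected thrift-generated code:

```
# Exclude the miscompiled method from JIT compilation entirely; it will
# always run in the interpreter. Names below are hypothetical.
java -XX:CompileCommand=exclude,com.example.thrift.FooStruct::read \
     -jar service.jar
```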
I ultimately left the company before the fix was made, but I think we were leaning towards the last option (hopefully filing a bug report, too...).
There's no way this is the normal reason companies don't bump JRE versions as soon as they come out, but it's happened at least once. :-)
In general there's probably some decent (if misguided) bias towards "things are working fine on the current version, why risk some unexpected issues if we upgrade?"
The end result of my own investigation led to this quite satisfying thread on hotspot-compiler-dev, in which an engineer starts with my minimal reproduction of the problem and posts a workaround within 24 hours: https://mail.openjdk.org/pipermail/hotspot-compiler-dev/2021...
There's also a tip there: try a fastdebug build and see if you can convert it into an assertion failure you can look up.
Also, it's unclear to me what happens if you attempt a snapshot in the middle of something like a database transaction or even a basic file write. Seems likely that the snapshot would still be corrupted. So for databases you're stuck using db-specific methods like pg_dump.
All this complexity makes it very difficult to make self-hosting realistic and safe by default for non-experts, which is the problem I'm having.
[0]: https://forum.restic.net/t/what-happens-if-file-changes-duri...
[1]: https://learn.microsoft.com/en-us/windows-server/storage/fil...
Now of course it's all about ZFS, so there's at least snapshots paired with replication - but the story for anything else is still pretty bad, with you having to put all the fiddly pieces together. I'm sure some people taught their backup tool about their special named backup snapshots sprinkled about in `.zfs/snapshot` directories, but given the fiddly nature of it I'm also sure most people just ended up YOLOing raw directories, temporal-smearing be damned.
I know I did!
I finally got around to fixing that last year with zfsnapr[1]. `zfsnapr mount /mnt/backup` and there's a snapshot of the system - all datasets, mounted recursively - ready for whatever the backup tool of the year happens to be.
I was kind of disappointed that when I mentioned it over on the Practical ZFS forum, the response was not "why didn't you just use <existing solution everyone uses>?" but "I can see why that might be useful".
Well, yes, it makes backups actually work.
> Also, it's unclear to me what happens if you attempt a snapshot in the middle of something like a database transaction or even a basic file write. Seems likely that the snapshot would still be corrupted
A snapshot is a point-in-time image of the filesystem. Any ACID database worth the name will roll back the in-flight transaction, just as it would if you issued it a `kill -9`.
For other file writes, that's really down to whether or not such interruptions were considered by the writer. You may well have half-written files in your snapshot, with the file contents as they were in between two write() calls. Ideally this will only be in the form of temporary files, prior to their rename() over the data they're replacing.
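In code, the safe pattern looks roughly like this - a minimal Java sketch (file names made up, error handling omitted), writing the new contents to a temporary file and then renaming it over the original in one atomic step:

```
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicWrite {
    public static void main(String[] args) throws Exception {
        Path target = Path.of("settings.json");
        Path temp = Path.of("settings.json.tmp");

        // Write and sync the complete new contents to a temporary file first
        Files.write(temp, "{\"version\": 2}".getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
                    StandardOpenOption.WRITE, StandardOpenOption.SYNC);

        // Then rename it over the old file; a snapshot (or crash) sees either
        // the old file or the complete new one, never a half-written mix
        Files.move(temp, target,
                   StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    }
}
```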
For everything else - well, you have more than one snapshot backed up, right?
Perhaps you could be more specific, because the former is exactly what a filesystem snapshot is meant to do, and the latter is exactly what an ACID database is meant to allow assuming the former.
> Look at what Kanister does with its recipes to get consistent DB snapshots
I looked at a few examples and they mostly seemed to involve running the usual database dump commands.
Server: caddy
Header that is impossible to turn off since it's hardcoded here: https://github.com/caddyserver/caddy/blob/master/modules/cad...

The developer's annoying response is "it doesn't improve privacy or security, so we won't give you the option to remove it".
header -Server
However, as this isn't global configuration, it'll tend to pop back up in implicit configs like HTTP redirects and error handling if not overridden.
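Something like this is what I mean - a Caddyfile sketch (hostnames are placeholders, and I haven't checked that this catches every implicitly-generated response), repeating the directive in an error handler and taking over the plain-HTTP listener so the automatic redirect doesn't reintroduce the header:

```
example.com {
	# Strip the Server header from normal responses
	header -Server

	# Error responses come from their own route, so strip it there too
	handle_errors {
		header -Server
		respond "{err.status_code} {err.status_text}"
	}
}

# The auto-generated HTTP->HTTPS redirect doesn't go through the block above,
# so define the plain-HTTP site explicitly and strip the header there as well
http://example.com {
	header -Server
	redir https://{host}{uri} permanent
}
```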
I have the newest Paperwhite (prior to the one announced here) and it is incredibly fast and zippy compared to the Kindles of old. And they claim the new one is 25% faster still.