Readit News
JZerf commented on Show HN: OS X Mavericks Forever   mavericksforever.com/... · Posted by u/Wowfunhappy
JZerf · 7 months ago
I also like the aesthetics of OS X versions prior to 10.10. It is hard to find programs that still support older versions though. To make things easier I'm using the more modern macOS 10.14 but installed a few things to make it look more like 10.9.

The file and instructions in https://forums.macrumors.com/threads/mavericks-window-contro... change many of the UI elements to look like those in 10.9.

Menu Bar Tint (https://manytricks.com/menubartint/) can make the menubar look like the one in 10.9.

macOSLucidaGrande (https://github.com/LumingYin/macOSLucidaGrande) can change the system font back to Lucida Grande like 10.9 used. One annoyance with this though is that the '*' character won't show up in password input fields.

JZerf commented on macOS Icon History   basicappleguy.com/basicap... · Posted by u/ksec
BeFlatXIII · 8 months ago
What do you dislike about modern Macbooks compared to 2012?
JZerf · 8 months ago
I'm actually typing this reply on a 2012 MacBook Pro which is still working pretty well. I've also used several recent MacBook models at work so am familiar with those. One thing I like better about the 2012 MacBook Pro compared to newer models is that it's easier to replace/upgrade components. On my 2012 MacBook Pro I've replaced the original hard drive with an SSD, upgraded the memory, and replaced the failed battery (which shouldn't be unexpected for such an old laptop), all of which were fairly simple to do.

I also do not like that Apple has completely removed all USB type A ports on the newer MacBooks. USB type A plugs are still very common and I wish Apple left one or two on the newer MacBooks in addition to the USB-C ports. Yes, you can use USB-C to USB type A adapters but it is annoying.

I also do not like that Apple has removed the Ethernet and microphone jacks. Both jacks are still useful to have on modern computers. I'll make an exception for removing the Ethernet jack on the MacBook Air to accommodate a thinner chassis but wish the MacBook Pro chassis was kept thick enough to accommodate the Ethernet jack.

JZerf commented on Next month, saved passwords will no longer be in Microsoft’s Authenticator app   cnet.com/tech/microsoft-w... · Posted by u/ColinWright
reginald78 · 8 months ago
Are you sure you need the Microsoft one? After reading the giant support document at my employer I eventually figured out that any TOTP supporting app would work but most of the documentation made it sound like Microsoft was required anyway.
JZerf · 8 months ago
This depends on how the organization configures things. My company used to allow TOTP so many TOTP apps could be used instead of Microsoft Authenticator but my company disabled that a while ago. Now the only authenticator app my company allows is Microsoft Authenticator using push notifications (see https://learn.microsoft.com/en-us/entra/identity/authenticat... ). Consider yourself lucky if your employer allows you to use any TOTP app you want instead of forcing you to use Microsoft Authenticator.
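For context on why any TOTP app is interchangeable when an organization allows TOTP: the codes aren't vendor-specific, they're just the RFC 4226/6238 HMAC truncation of a shared secret and the current 30-second time step. A minimal sketch (assumes bash and the openssl CLI; the key below is the RFC 4226 test secret "12345678901234567890" in hex):

```shell
# HOTP (RFC 4226): 6-digit code from a hex key and a counter.
hotp() {
    key_hex=$1
    counter=$2
    digits=${3:-6}
    msg=$(printf '%016x' "$counter")        # 8-byte big-endian counter
    # Feed the raw counter bytes to HMAC-SHA1 keyed with the secret.
    hmac=$(for ((i = 0; i < 16; i += 2)); do
               printf "\\x${msg:i:2}"
           done | openssl dgst -sha1 -mac HMAC -macopt hexkey:"$key_hex" |
           awk '{print $NF}')
    offset=$(( 0x${hmac:39:1} ))                   # low nibble of last byte
    dbc=$(( 0x${hmac:offset*2:8} & 0x7fffffff ))   # dynamic binary code
    printf "%0${digits}d\n" $(( dbc % 10**digits ))
}

# TOTP (RFC 6238) is just HOTP with counter = unix_time / 30.
totp() { hotp "$1" $(( $(date +%s) / 30 )) "${2:-6}"; }

# Example with the RFC 4226 test key:
# hotp 3132333435363738393031323334353637383930 0
```

Any app implementing this produces identical codes from the same secret, which is why the choice of app only matters when the organization disables TOTP entirely.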
JZerf commented on My failed attempt to shrink all NPM packages by 5%   evanhahn.com/my-failed-at... · Posted by u/todsacerdoti
gregmac · a year ago
5% off your next lunch and 5% off your next car are very much not the same thing.
JZerf · a year ago
Those lunches could add up to something significant over time. If you're paying $10 per lunch for 10 years, that's $36,500 which is pretty comparable to the cost of a car.
JZerf commented on Bypassing disk encryption on systems with automatic TPM2 unlock   oddlama.org/blog/bypassin... · Posted by u/arjvik
JZerf · a year ago
Found the page I had mentioned earlier: https://pawitp.medium.com/the-correct-way-to-use-secure-boot... . In the comments Aleksandar mentioned the possibility of using the attack mentioned in this article and the author replied back with the same solution of verifying a secret file on the root partition.
JZerf commented on Bypassing disk encryption on systems with automatic TPM2 unlock   oddlama.org/blog/bypassin... · Posted by u/arjvik
blastrock · a year ago
Very clever!

I am the author of one of the older guides https://blastrock.github.io/fde-tpm-sb.html .

I was wondering about the solution you propose which seems a bit complicated to me. Here's my idea, please tell me if I'm completely wrong here.

What if I put a file on the root filesystem with some random content (say 32 bytes), let's name it /prehash. I hash this file (sha256, blake2, whatever). Then, in the signed initrd, just after mounting the filesystem, I assert that hash(/prehash) == expected_hash or crash the system otherwise. Do you think it would be enough to fix the issue?

JZerf · a year ago
I was reading another web page (I don't have the link unfortunately) several days ago where another reader pointed out to the author the same type of attack mentioned in this article. To address that attack the author came up with the same solution you proposed and I do believe that is sufficient for preventing the type of attack mentioned in this article. There still are other types of attacks (cold boot attack, sniffing TPM traffic, etc...) that can be done though so it still is a good idea to use a PIN/password, network-bound disk encryption, etc... in addition to the TPM.

I'm currently working on setting up disk encryption for a new home server and as an additional precaution I'm also working on getting the initrd to do a few additional sanity checks prior to decrypting a LUKS partition and prior to mounting the root file system within. One check which I think will be highly effective is that prior to decrypting the LUKS partition I have the initrd hash the entire LUKS header and make sure it has the expected value before allowing the boot to continue. So far it seems to be working OK, but hashing the entire LUKS header is overkill and will require some care to keep the expected hash value updated if the LUKS header changes for some reason (like changing encryption passwords). Consequently, I can't recommend this idea for everyone.
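A minimal sketch of that kind of check (device path, mount points, and the expected hash are illustrative placeholders; `cryptsetup luksHeaderBackup` is used to dump the header so the keyslot area is covered too):

```shell
# Compare a header dump's SHA-256 against a known-good digest.
check_header_hash() {
    # $1 = file holding the header dump, $2 = expected sha256 hex digest
    actual=$(sha256sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

# In the initrd, roughly (all names are examples):
#   DEV=/dev/sda2
#   EXPECTED=<sha256 of a trusted luksHeaderBackup dump, baked into the signed initrd>
#   rm -f /tmp/hdr
#   cryptsetup luksHeaderBackup "$DEV" --header-backup-file /tmp/hdr
#   check_header_hash /tmp/hdr "$EXPECTED" || { echo "LUKS header mismatch" >&2; exit 1; }
#   cryptsetup open "$DEV" cryptroot   # only reached if the header is untouched
```

As noted above, any operation that rewrites the header (changing a passphrase, adding a keyslot) changes the digest, so the baked-in expected value has to be regenerated the same way afterward.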

JZerf commented on ZFS 2.3 released with ZFS raidz expansion   github.com/openzfs/zfs/re... · Posted by u/scrp
dizhn · a year ago
>I sure hope I've upgraded SSDs by the year 2065.

My mind jumped at that too when I first read parent's comment. But presumably he's writing other files to disk too. Not just that one file. :)

JZerf · a year ago
> But presumably he's writing other files to disk too. Not just that one file.

Yes, there will be much more going on than the simple test I was doing. The server will be hosting several VMs running a mix of OSes and distros and running many types of services and apps.

JZerf commented on ZFS 2.3 released with ZFS raidz expansion   github.com/openzfs/zfs/re... · Posted by u/scrp
craftkiller · a year ago
One knob you could change that should radically alter that is zfs_txg_timeout which is how many seconds ZFS will accumulate writes before flushing them out to disk. The default is 5 seconds, but I usually increase mine to 20. When writing a lot of data, it'll get flushed to disk more often, so this timer is only for when you're writing small amounts of data like the test you just described.
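For reference, on Linux the knob lives under the OpenZFS module's sysfs parameters (paths assume ZFS on Linux; the value is the example from above):

```shell
# Runtime change (root only, lasts until reboot):
echo 20 > /sys/module/zfs/parameters/zfs_txg_timeout

# To persist across reboots, set it as a module option instead,
# e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_txg_timeout=20
```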

> like might happen for some programs that are writing logs fairly continuously

On Linux, I think journald would be aggregating your logs from multiple services so at least you wouldn't be incurring that cost on a per-program basis. On FreeBSD with syslog we're doomed to separate log files.

> over a 10 year span that same program running continuously would result in about 100 TB of writes being done to the drive which is about a quarter of what my SSD is rated for

I sure hope I've upgraded SSDs by the year 2065.

JZerf · a year ago
> One knob you could change that should radically alter that is zfs_txg_timeout which is how many seconds ZFS will accumulate writes before flushing them out to disk.

I don't believe that zfs_txg_timeout setting would make much of a difference for the test I described where I was doing synchronous writes.

> On Linux, I think journald would be aggregating your logs from multiple services so at least you wouldn't be incurring that cost on a per-program basis.

The server I'm setting up will be hosting several VMs running a mix of OSes and distros and running many types of services and apps. Some of the logging could be aggregated but there will be multiple types of I/O (various types of databases, app updates, file server, etc...) and I wanted to get an idea of how much file system overhead there might be in a worst-case kind of scenario.

> I sure hope I've upgraded SSDs by the year 2065.

Since I'll be running a lot of stuff on the server, I'll probably have quite a bit more writing going on than the test I described so if I used ZFS I believe the SSD could reach its rated endurance in just several years.

JZerf commented on ZFS 2.3 released with ZFS raidz expansion   github.com/openzfs/zfs/re... · Posted by u/scrp
craftkiller · a year ago
No need to keep it to yourself. As you've mentioned, all of these requirements are misinformation so you can ignore people who repeat them (or even better, tell them to stop spreading misinformation).

For those not in the know:

You don't need to use enterprise quality disks. There is nothing in the ZFS design that requires enterprise quality disks any more than any other file system. In fact, ZFS has saved my data through multiple consumer-grade HDD failures over the years thanks to raidz.

The 1 GB of RAM per TB figure applies ONLY when using the ZFS dedup feature, which is widely regarded as a bad idea except in VERY specific use cases. 99.9% of ZFS users should not and will not use dedup and therefore do not need ridiculous piles of RAM.

There is nothing in the design of ZFS that makes it any more dangerous to run without ECC than any other filesystem. ECC is a good idea regardless of filesystem but it's certainly not a requirement.

And you don't need 5x disk redundancy. It runs great and has benefits even on single-disk systems like laptops. Naturally, having parity drives is better in case a drive fails, but on single-disk systems you still benefit from the checksumming, snapshotting, boot environments, transparent compression, incremental zfs send/recv, and cross-platform native encryption.

JZerf · a year ago
One reason why it might be a good idea to use higher quality drives when using ZFS is because it seems like in some scenarios ZFS can result in more writes being done to the drive than when other file systems are used. This can be a problem for some QLC and TLC drives that have low endurance.

I'm in the process of setting up a server at home and was testing a few different file systems. I was doing a test where I had a program continuously synchronously writing just a single byte every second (like might happen for some programs that are writing logs fairly continuously). For most of my tests I was just using the default settings for each file system. When using ext4 this resulted in 28 KB/s of actual writes being done to the drive which seems reasonable due to 4 KB blocks needing to be written, journaling, writing metadata, etc... BTRFS generated 68 KB/s of actual writes which still isn't too bad. When using ZFS about the best I could get it to do after trying various settings for volblocksize, ashift, logbias, atime, and compression settings still resulted in 312 KB/s of actual writes being done to the drive which I was not pleased with. At the rate ZFS was writing data, over a 10 year span that same program running continuously would result in about 100 TB of writes being done to the drive which is about a quarter of what my SSD is rated for.
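The test described above can be approximated with a sketch like this (file path and loop length are arbitrary examples; dd's `oflag=sync` makes each 1-byte append an O_SYNC write, the way a chatty logger calling fsync would behave):

```shell
# Append one synchronous byte per second to the given file.
# While it runs, compare actual device write rates across filesystems
# with e.g. `iostat -d 10` or by diffing /proc/diskstats.
sync_write_test() {
    file=$1
    count=$2
    i=0
    while [ "$i" -lt "$count" ]; do
        printf 'x' | dd of="$file" oflag=append,sync conv=notrunc status=none
        i=$((i + 1))
        sleep 1
    done
}

# Example: 60 synchronous 1-byte appends over a minute on the
# filesystem under test.
# sync_write_test /mnt/test/onebyte.log 60
```

For scale: at 312 KB/s, ten years of continuous operation is roughly 312 KB/s × 3.15 × 10^8 s ≈ 98 TB, which matches the ~100 TB figure above.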

JZerf commented on Assassin's Creed publisher Ubisoft facing lawsuit for shutting down game   gamingbible.com/news/plat... · Posted by u/bookofjoe
pmontra · 2 years ago
And yet I can put a Gran Turismo DVD in my old PS2 (was that GT4 or GT5?) and play with it. No networking (it never had that) but perfectly playable.

If your game needs a server to be played, you also sell the server. Somebody will put it on a machine somewhere and manage it. I have no direct experience of Minecraft but it used to be more or less like that, right?

JZerf · 2 years ago
Gran Turismo 4 was the last Gran Turismo for the PS2. It also did support networking for LAN gameplay but networking isn't required to play the game.

u/JZerf

Karma: 68 · Cake day: January 5, 2021
About
https://JeremyZerfas.com