https://datarecovery.com/rd/why-does-it-take-multiple-passes...
If a drive contained state secrets, I might use /dev/urandom instead of /dev/zero, but those kinds of drives are probably just shredded.
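The canonical tools for a single-pass wipe are dd or shred, but purely as an illustration of the zero-vs-random choice, here's a minimal Python sketch (the device path is a placeholder; this needs root and destroys everything on the device):

    import os

    DEVICE = "/dev/sdX"      # placeholder -- triple-check before pointing this at real hardware
    BLOCK = 1024 * 1024      # write in 1 MiB chunks

    def wipe(device, use_random=False):
        """One pass of zeros (like /dev/zero), or random data (like /dev/urandom)."""
        zeros = bytes(BLOCK)
        with open(device, "wb", buffering=0) as dev:
            while True:
                data = os.urandom(BLOCK) if use_random else zeros
                try:
                    written = dev.write(data)
                except OSError:          # ENOSPC once we hit the end of the device
                    break
                if written < len(data):  # a short write also means end of device
                    break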
Losing weight is simple in theory: you can just eat less. In practice, eating less is very hard for some people, and having real-time glucose information isn't going to help them.
Most HN folks think diversity is a good thing, and I'm not saying it isn't, but it does have its disadvantages. In my case, I could probably afford to buy new Android phones at least 3x as often as iPhones, but a lot of people (me included) don't want to be fiddling with a new phone every year or two. It was apparent to me that Android updates are not tested thoroughly on older phones. I understand that would be hard because there is a huge variety of hardware, but it's a significant downside of Android IMO.
On the other side, Go's errors are more work for the programmer and they clutter the code. But if you consistently wrap errors in Go, you no longer need stack traces. And the advantage of wrapped errors with descriptive error messages is that they are much easier to read for non-programmers.
If you want to please the dev-team: use exceptions and stack traces. If you want to please the op-team: use wrapped errors with descriptive messages.
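To make "wrapped errors with descriptive messages" concrete, here is a sketch of the same pattern in Python (the file name and messages are made up); each layer adds one short phrase of context, so the final message reads like a sentence an operator can act on:

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            # Same spirit as Go's fmt.Errorf("reading config %s: %w", path, err)
            raise RuntimeError(f"reading config {path}: {e}") from e

    def start_backup(path):
        try:
            return load_config(path)
        except RuntimeError as e:
            raise RuntimeError(f"starting backup: {e}") from e

    # What an operator would see, no stack trace needed:
    #   starting backup: reading config /etc/hb.conf: [Errno 2] No such file or directory: '/etc/hb.conf'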
For situations where an unexpected error is retried, e.g. accessing some network service, the error gets a compressed stack trace string included with the context error message. The compressed stack trace strings together the program commit id, the Python source file names (not pathnames) and line numbers, followed by a context error message, like:
[#3271 a 25 b 75 c 14] Error accessing server xyz; http status 525
Then the user gets an idea of what went wrong, doesn't get overwhelmed with a lot of irrelevant (to them) debugging info, and if the error is reported, it's easy to tell what version of the program is running and exactly where and usually why the error occurred.
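A minimal sketch of how a compressed trace like that can be built in Python (the commit id constant and the exact format are assumptions, not HashBackup's actual code):

    import os
    import traceback

    COMMIT_ID = "3271"   # assumed to be stamped into the program at build time

    def compressed_trace(exc, msg):
        """One line: commit id, file name / line number pairs, then the context message."""
        frames = traceback.extract_tb(exc.__traceback__)
        parts = []
        for frame in frames:
            name = os.path.splitext(os.path.basename(frame.filename))[0]
            parts.append(f"{name} {frame.lineno}")
        return f"[#{COMMIT_ID} {' '.join(parts)}] {msg}"

    try:
        raise ConnectionError("http status 525")
    except ConnectionError as e:
        print(compressed_trace(e, "Error accessing server xyz; http status 525"))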
One of the big reasons I haven't switched from Python to Go for HashBackup (I'm the author) is that while I'd love to have a code speed-up, I can't stomach the work involved in adding 'if err != nil { return fmt.Errorf("blah: %w", err) }' after most lines of existing code. It would hugely (IMO) bloat the existing codebase.
I don't think I have delusions of grandeur; I worry that the cost of exterminating people algorithmically could become so low that they could decide to start taking out small fries in batches.
A lot of narratives which would have sounded insane 5 years ago actually seem plausible nowadays... Yet the stigma still exists. It's still taboo to speculate on the evils that modern tech could facilitate and the plausible deniability it could provide.
My guess is that the cost of taking out a small fry today is already extremely low, and a desperate low-life could be hired for less than $1000 to kill a random person that doesn't have a security detail.
Some people qualify for a tax subsidy that can be anywhere from $0 to the entire cost of a plan, depending on their income. A unique feature is that the subsidy is based on your expected income for the upcoming year; if you end up making less than that (you are laid off, for example) or more (an independent contractor gets an unexpected contract), the subsidy is adjusted when you file your taxes.
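With purely made-up numbers (this is not the real ACA subsidy schedule), the estimate-then-reconcile mechanics look like this:

    def toy_subsidy(income):
        """Illustrative only: a subsidy that phases out linearly between $20k and $60k."""
        floor, ceiling, max_subsidy = 20_000, 60_000, 6_000
        if income <= floor:
            return max_subsidy
        if income >= ceiling:
            return 0
        return max_subsidy * (ceiling - income) / (ceiling - floor)

    advance = toy_subsidy(45_000)        # based on expected income for the year
    actual  = toy_subsidy(30_000)        # laid off mid-year, so actual income was lower
    adjustment = actual - advance        # positive: extra credit at tax time; negative: repayment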
Currently the ACA does not accept anyone who has a policy through work. IMO, everyone should have the option of getting ACA healthcare coverage. If their work coverage is better or cheaper, they can stick with that, but if their work coverage is worse or more expensive, employees should be allowed to get ACA coverage, with the employer paying part or all of the subsidy (what they would have paid to a private insurance company for the employee) instead of the government paying all of it.
I believe that using non-ECC RAM is a potential cause of silent disk errors. If you read a sector without error and then a cosmic ray flips a bit in the RAM holding that sector, you now have a bad copy of the sector with no error indication. Even if the backup software computes a hash of the bad data and records it with the data, it's too late: the hash is of bad data. If you are lucky and the hash is created before the RAM bit flip, at least the hash won't match the bad data, so if you try to restore the file, you'll get an error at restore time. It's impossible to recover the correct data, but at least you'll know that.
The good news is that if you back up the file again, it will be read correctly and will differ from the previous (corrupted) backup. The bad news is that most backup software skips files based on metadata such as ctime and mtime, so until the file changes, it won't be re-saved.
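A minimal sketch of that skip logic, assuming the common mtime/ctime/size heuristic (HashBackup's actual rules may differ):

    import os
    import hashlib

    def file_signature(path):
        """The metadata many backup tools compare to decide whether to re-read a file."""
        st = os.stat(path)
        return (st.st_mtime_ns, st.st_ctime_ns, st.st_size)

    def needs_backup(path, last_signature):
        # Unchanged metadata means the file is skipped -- even if the copy saved
        # last time was corrupted by a bit flip in RAM after the data was read.
        return file_signature(path) != last_signature

    def store_block(data):
        # The hash describes whatever is in RAM; if a bit already flipped,
        # it faithfully describes the corrupted data.
        return hashlib.sha256(data).hexdigest(), data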
We are so dependent on computers these days, it's a real shame that ECC RAM isn't standard on all of them. The real reason it isn't is that server manufacturers want to charge data centers higher prices for "real" servers with ECC.