opejn commented on The Science Is Clear: Dirty Farm Water Is Making Us Sick   wired.com/story/the-scien... · Posted by u/rafaelc
gregmac · 7 years ago
Very interesting points in this thread:

> Testing your irrigation water won't keep the feedlot next door from blowing dried-up cow shit dust all over your lettuce. (Or, you know, into your irrigation water on a day when you're not testing). [1]

> It won't keep harvest crew co's from putting so much time pressure on crews that they can't clean their harvest rigs properly between shifts. [2]

> And it sure won't solve our traceability problem. There's no way in hell we're having this much trouble tracing the lettuce to its source, unless there's massive supply chain fraud going on. We should be worried about THAT. [3]

This one reminds me of a situation I ran into once:

> (You also have to ask, WHY would a farm or any other business target their testing towards when they're likely to get red flags? They don't want that. Any time you do testing, there are lots of ways to game it to lean towards clean results. Which is what farms & companies want.) [4]

We were installing a drinking water monitoring system at a rural retirement home (on well water), probably a couple hundred residents.

Up to that point, one of the tests they did was a manual daily chlorine level check. Essentially, you need a certain amount of residual chlorine in the water at the point of use, which tells you that you're putting enough in (if you put in too little, it gets used up breaking down organics in the water, and you end up with a reading of zero). If the level is too low, you have to report it and explain what happened and how you fixed it, and if it happens too often, inspectors will start looking closely at your practices and forcing fixes, and basically nobody likes that.

The new system had continuous monitoring of several parameters, one of them being chlorine, and would record everything as well as send alerts to people if anything dangerous happened. The chlorine monitor was installed at the opposite end of the building from the water supply, trying to be representative of the worst-case 'point of use' level.

As soon as it was online, it started sending alerts every night in the middle of the night. The free chlorine level would drop starting late in the evening (as the usage dropped), and then somewhere between 2 and 5 am would drop below the alarm threshold, and around 6-7 am jump back to normal. The owner was pissed that we had screwed the system up, because they didn't have this problem before the monitoring system was in.

Of course, what was happening was not new. As usage dropped off when the residents went to bed, the water in the pipes sat stagnant, and the residual chlorine was slowly used up until there was none left. The daily checks were usually done mid-morning, after heavy use as everyone woke up, so all the water in the lines was freshly chlorinated when the check was done.

For years, anyone getting water in the middle of the night had been drinking technically unsafe water (it had still been treated and likely posed no real health risk, so long as there was no contamination in any of the pipes or fixtures); all the new system did was expose the situation.

We eventually added a solenoid to the end of the line, and programmed it so it would open and dump water when the chlorine level got low but before the alarm threshold, at least getting fresh water into the main distribution lines of the building and avoiding the alarm condition.
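For what it's worth, that control logic fits in a few lines of shell. This is a hypothetical sketch, not the real controller: the thresholds, units, and `read_chlorine` sensor stub are all invented for illustration; the key idea is just that the flush threshold sits above the alarm threshold.

```shell
ALARM=20     # alarm threshold, hundredths of mg/L (made-up units)
FLUSH=40     # flush threshold, deliberately above ALARM
read_chlorine() { echo 35; }   # stand-in for the real sensor reading

level=$(read_chlorine)
if [ "$level" -lt "$ALARM" ]; then
  echo "ALARM: chlorine low"
elif [ "$level" -lt "$FLUSH" ]; then
  echo "open dump valve"       # pull fresh water in before an alarm can trip
else
  echo "valve closed"
fi
```

Run in a loop against the real sensor, the middle branch drains stagnant water before the level can ever reach the alarm condition.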

It always stuck with me that the owner's reaction was not "Oh, thank you for finding and fixing a problem I didn't even know about before it got someone really sick" but more like "your stupid system made things WORSE, we should rip it out and go back to how it was before."

[1] https://mobile.twitter.com/SarahTaber_bww/status/10670961003...

[2] https://mobile.twitter.com/SarahTaber_bww/status/10670961010...

[3] https://mobile.twitter.com/SarahTaber_bww/status/10670961017...

[4] https://mobile.twitter.com/SarahTaber_bww/status/10670961060...

opejn · 7 years ago
Thank you for sharing your story. I hope you don't mind if I turn it into a thought experiment, because I find it a good example of how tricky finding and fixing a problem can be. I know nothing about water quality standards, so these may be naive, but the following questions stick out to me:

* Is there a reasonable contamination hazard from water sitting in pipes that are flushed daily with chlorinated water? In other words, is it vital that there always be residual chlorine 100% of the time, or were the standards designed to accommodate this sort of periodic lapse?

* If the dips were harmless, would the inspectors be willing to accept that?

* Was the dump solenoid effective at flushing the entire network of pipes, or just the branch containing the sensor?

Depending on the answers, the end result could range all the way from "the new system let us eliminate a serious hazard", through "we were probably fine before, but now we can be sure at a little extra cost", down to "now we have to waste water to avoid tripping a sensor so we don't get fined, with no actual improvement to water quality". It's a great example of how what we want, what we test, and what we enforce can get just a little out of alignment, and make a hindrance out of what ought to be a definite improvement. Thanks again for sharing.

opejn commented on C for All   plg.uwaterloo.ca/~cforall... · Posted by u/etrevino
poizan42 · 8 years ago
> a mythical operator that doesn't actually make much sense.

Hmm, I'm pretty sure practically every programming language has it. It usually looks like "!=" or "<>".

The even more obscure logical XNOR is usually denoted "==" or "="

opejn · 8 years ago
Yes! In C its full spelling is "!a != !b".
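Shell arithmetic borrows C's operators, so the claim is easy to poke at from a terminal. A trivial demo, nothing more: `!x` normalizes each operand to 0 or 1, and `!=` on the normalized values is exactly logical XOR.

```shell
a=3; b=0
echo $(( !a != !b ))   # 1: exactly one operand is truthy
a=3; b=7
echo $(( !a != !b ))   # 0: both truthy, so not exclusive
```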
opejn commented on Remote code execution, git, and OS X   rachelbythebay.com/w/2016... · Posted by u/ingve
atdt · 10 years ago
It's a bit too precarious to be an adequate solution, in my opinion. It depends on /usr/local/bin always being ahead of /usr/bin in $PATH, and on scripts never invoking the system git via its full path, and on Homebrew never accidentally uninstalling git due to a botched upgrade. Not to mention the fact that Homebrew itself uses the system git to install itself.
opejn · 10 years ago
> Not to mention the fact that Homebrew itself uses the system git to install itself.

To me, this is the biggest problem, and it's not just Homebrew. Any source package manager that uses Git will potentially have this problem. With a vulnerable Git on your system, you have to second-guess every build script you ever run that might make use of Git, to make sure it obeys the path you set instead of choosing its own.

opejn commented on Show HN: Sensible Bash: An attempt at saner Bash defaults   github.com/mrzool/bash-se... · Posted by u/mrzool
riquito · 10 years ago
Does anyone know if bash's scripting syntax could be updated to allow spaces around assignments? I can't see how it would break existing programs (but I'm sure there's a catch), and scripts would be much cleaner (and less of a headache).
opejn · 10 years ago
A breaking example might be trying to find lines containing an "=" in a file:

    grep = my_file
Also a problem is the syntax for running a program with environment assignments that apply only to the program:

    env1=foo env2=bar env3= my_program
Note that under POSIX rules, "env3" here is assigned a zero-length string. Making these sorts of assignments work with spaces around the equal signs would open up a can of worms.
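Both hazards are easy to demonstrate; the file path below is invented for the demo:

```shell
# 1) "=" is already a perfectly legal command argument:
printf 'a=1\nb 2\n' > /tmp/eq_demo
grep = /tmp/eq_demo                    # matches "a=1"; "=" is a pattern, not an assignment
rm /tmp/eq_demo

# 2) Per POSIX, a bare "name=" assigns the empty string for one command:
env3= sh -c 'printf "[%s]" "$env3"'    # env3 is set, but empty
```

If spaces were allowed, the shell couldn't tell whether `grep = my_file` is a search or an assignment to `grep`, and the one-command environment syntax would become ambiguous too.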

opejn commented on Google says self-driving car hits municipal bus in minor crash   reuters.com/article/googl... · Posted by u/sajal83
seanp2k2 · 10 years ago
The issue for me is drive-by-wire. I'm cool with a computer trying to steer as long as it gives up when I try to fight it. I also preferred the cruise control where you could feel the pedal moving under your foot, because that was the master control tied to the carburetor or sensor that managed fuel.

I still think that self-driving stuff should be more enhanced cruise control and less "there is no steering wheel or controls".

opejn · 10 years ago
The problem with this is the limits of human attention. It's hard enough to maintain focus on long drives as it is; if the "enhanced cruise control" takes over the job entirely, the driver will have nothing to do and is likely to stop paying attention to the road at all. Then he'll either miss his chance to take manual control, or do so in a state of panic.

Google has been making arguments along these lines -- for instance: http://gizmodo.com/why-self-driving-cars-really-shouldnt-eve...

opejn commented on Sci-Hub as necessary, effective civil disobedience   bjoern.brembs.net/2016/02... · Posted by u/ingve
return0 · 10 years ago
I think scihub uses various proxies in multiple universities to fetch the articles.
opejn · 10 years ago
I think the question was whether we can easily mass-download the papers from Scihub.
opejn commented on dd – Destroyer of Disks (2008)   noah.org/wiki/Dd_-_Destro... · Posted by u/opensourcedude
SeldomSoup · 10 years ago
> If you want to erase a drive fast then use the following command (where sdXXX is the device to erase):

    dd if=/dev/zero of=/dev/sdXXX bs=1048576
Question: is there a disadvantage to using a higher blocksize? Is the read/write speed of the device the only real limit?

opejn · 10 years ago
> is there a disadvantage to using a higher blocksize?

Maybe, depending on the details. Imagine reading 4 GB from one disk and then writing it all to another, all at 1 MB/sec. If your block size is 4 GB, it'll take 4000 seconds to read, then another 4000 seconds to write... and will also use 4 GB of memory.

If your block size is 1 MB instead, then the system has the opportunity to run things in parallel, so it'll take 4001 seconds, because every read beyond the first happens at the same time as a write.

And if your block size is 1 byte, then in theory the transfer would take almost exactly 4000 seconds... except that now the system is running in circles ferrying a single byte at a time, so your throughput drops to something much less than 1 MB/sec.

In practice, a 1 MB block size works fine on modern systems, and there's not much to be gained by fine-tuning.
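If you want to see this without risking a real device, a throwaway file works (the path here is invented). The byte count comes out identical either way; only the elapsed time and syscall count differ, which is the whole trade-off:

```shell
# Same 16 MB written with a sane block size and then a tiny one.
dd if=/dev/zero of=/tmp/bs_demo bs=1048576 count=16 2>/dev/null
wc -c < /tmp/bs_demo      # 16777216 bytes

dd if=/dev/zero of=/tmp/bs_demo bs=512 count=32768 2>/dev/null
wc -c < /tmp/bs_demo      # 16777216 bytes again, just many more syscalls

rm /tmp/bs_demo
```

Shrink `bs` toward 1 and the run time balloons even on fast hardware, purely from per-block overhead.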

opejn commented on dd – Destroyer of Disks (2008)   noah.org/wiki/Dd_-_Destro... · Posted by u/opensourcedude
drzaiusapelord · 10 years ago
>but it's probably not enough to persuade other people that it's gone.

I believe there is a long-standing bounty for anyone who can retrieve useful data from a drive that has been zeroed once. No one has managed it thus far.

A lot of the disk wiping "culture" stems from a much earlier time when disk technology was less reliable, especially with regard to writes. Peter Gutmann himself says that the Gutmann method is long antiquated and only worked with MFM/RLL-encoded disks from the '80s and early '90s.

Perhaps instead of humoring these people, we should be educating them. A zero'd out disk is a wiped disk until someone proves otherwise.

opejn · 10 years ago
This reminds me of assertions we used to take for granted about DRAM. We used to assume that the contents are lost when you cut the power, but then someone turned a can of cold air on a DIMM. We usually assume that bits are completely independent of each other, but then someone discovered rowhammer. The latter is especially interesting because it only works on newer DIMM technology. Technology details change, and it's hard to predict what the ramifications will be. A little extra caution isn't necessarily a bad thing.
opejn commented on Deprecating Non-Secure HTTP   blog.mozilla.org/security... · Posted by u/talideon
joshmn · 11 years ago
I can imagine a lot of personal sites will suffer from this. Most are sitting on something like Eleven2 or Dreamhost, which require a dedicated IP for an SSL certificate; the user then has to buy the certificate and figure it out for himself (it's not trivial for the average "webmaster"), or buy it from their host at a hefty markup.

Yes, the hosts could wildcard. Yes, there are other solutions out there. But for the average Joe who is blogging about his vacations and family? They're going to be completely lost.

Why don't shared hosts just wildcard? Shared certificate? Well, let's think about it... Charging ~$5/month/dedicated IP is a nice upsell, and getting $70 for an installed SSL cert that costs them $10 from their SSL cert reseller, that takes them 2 minutes to configure... That's a nice slice of pie. I'd take that bet any day.

opejn · 11 years ago
I think you're overstating how bad things are. Dreamhost, for example, no longer requires a dedicated IP for SSL, though they do still recommend it for e-commerce. They are charging $15/year for a CA-signed certificate. Granted, that's for a single-site cert and they don't support wildcards under this scenario, but the vacation blogger isn't likely to need that anyway.
opejn commented on Opportunistic Encryption for Firefox   bitsup.blogspot.com/2015/... · Posted by u/cpeterso
Dylan16807 · 11 years ago
> 443 is a good choice :)

If it's self signed, and going to throw massive warnings with a direct connection, shouldn't you use anything other than 443?

Any subtleties I should be aware of?

opejn · 11 years ago
The main reason I would think it's a good choice is that if you decide to get a CA certificate later, you just drop it in and you're done; no additional configuration required.

If you don't have a CA certificate, you're probably not advertising your https:// URLs anyway, so unless search engines are aggressively seeking out or prioritizing https transport, it wouldn't seem to hurt anything to run a self-signed certificate there.
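For anyone trying this, a throwaway self-signed certificate is a one-liner with openssl. The paths and subject name below are invented for the example:

```shell
# Generate a key and self-signed cert of the kind you'd serve on 443,
# swappable later for a CA-signed one with no other config changes.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=example.test" \
  -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt 2>/dev/null

openssl x509 -in /tmp/selfsigned.crt -noout -subject   # sanity-check the subject
rm /tmp/selfsigned.key /tmp/selfsigned.crt
```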
