murderberry commented on Cities turn to ‘extreme’ water recycling   e360.yale.edu/features/on... · Posted by u/CoBE10
lucidguppy · 2 years ago
This stuff is good, but it's penny-wise and pound-foolish.

Agriculture is outright wasteful of water. California agriculture consumes 80% of the state's water.

https://water.ca.gov/Programs/Water-Use-And-Efficiency/Agric...

It's an environmental equivalent of Amdahl's law - spending so much effort to make a small portion of the water use efficient when we could do far less work to make agriculture more efficient. Of course it's all because of lobbying.

murderberry · 2 years ago
It's pretty complex, though. If a farmer pumps water out of the aquifer directly underneath, irrigates crops, and most of the water (minus evaporation and crop biomass) is returned to the aquifer in a matter of days... is it fair to say the farmer wasted it? Modern irrigation systems easily have an efficiency of 80-90%.

Some irrigated farms in the Central Valley will be withdrawing from aqueducts, but part of the reason the valley is dry in the first place is that we built these aqueducts, harming agricultural land for the benefit of SoCal cities, with the promise that the farmers would be able to use that water. So I'm not sure it's fair for us to claim the moral high ground.

Much of the California water crisis is manufactured too. There's no shortage of freshwater for the foreseeable future, but we're not building new dams, aqueducts, etc., essentially relying on infrastructure built in the 1960s and before, for a population only a fraction of what we have right now. Climate change plays a role, but the bulk of the pain is self-inflicted and has little to do with growing rice or watering our lawns.

murderberry commented on LCD TVs won’t see any further development   tomsguide.com/news/its-of... · Posted by u/belltaco
kitsunesoba · 2 years ago
Not entirely sure but I think OLED pixels modulate brightness with PWM, which causes high frequency (though invisible) flickering.
murderberry · 2 years ago
LCDs have PWM-dimmed backlights, so if your laptop isn't set to maximum brightness, you're probably staring at a flickering surface too.

Perhaps a simpler explanation is just contrast? Both OLEDs and CRTs can produce much higher contrast than LCDs.

murderberry commented on Growing from engineer to manager   newsletter.eng-leadership... · Posted by u/gregorojstersek
usrnm · 2 years ago
This trope of "growing" from an engineer to a manager needs to die. I grow as an engineer when I learn new stuff or become better at building things. Becoming a manager is changing jobs, not growth in any meaningful way. Not even in salary in many places
murderberry · 2 years ago
Yes and no. It's a common trope at many tech companies that these are two completely separate tracks, with no special upside to either.

But here's the reality: any large company will have a need for an army of directors and VPs, compared to only a handful of ultra-senior, visionary engineers. Take Google and compare the number of Jeff Dean-type folks to the number of senior managers collecting similar paychecks.

So yeah, if your goal is to retire early without depending too much on luck or on being exceptional, management is your career growth path. For better or worse.

murderberry commented on Ted Kaczynski has died   nytimes.com/2023/06/10/us... · Posted by u/mfiguiere
moffkalast · 2 years ago
I don't suppose anyone has a short summary?
murderberry · 2 years ago
"Technology bad."

It's really that. He falls into the same trap as many modern critics of progress: nostalgia for a world that never existed, when men lived meaningful lives in peaceful harmony with nature... juxtaposed with all the purported moral, societal, and environmental decay of today.

Many people find it alluring today, but the themes are evergreen. They crop up in ancient Greece, in the Middle Ages, and throughout history.

Misplaced nostalgia aside, another problem with most such ideologies is that the prescription for returning to that utopian bygone era inevitably involves force: the premise is that our minds are too corrupted to understand what's right. Whether that's blowing things up or taking away your rights is just an implementation detail.

murderberry commented on Google cuts office space in Bay Area by more than a million square feet   mv-voice.com/news/2023/06... · Posted by u/vrangan1989
murderberry · 2 years ago
Real estate is just a slog. It takes years to build new office space. Lease opportunities come up on weird schedules and tend to be long-term too. They're likely offloading the space they started acquiring pre-COVID, or in the first year of the pandemic (when the industry was hiring like crazy), because they're not optimistic about being able to fill it any time soon.

They could probably spread things out instead, but real estate is stupidly expensive, and Google was never known for spacious accommodations (except maybe in some remote offices). I can't imagine they have any motivation to spend more if their approach worked fine for more than a decade.

I don't love it, but how many applicants walked away because of cramped open spaces? How many top performers quit for that reason? We just put up with it.

murderberry commented on Weird GPT-4 behavior for the specific string “ davidjl”   twitter.com/goodside/stat... · Posted by u/goranmoomin
Buttons840 · 2 years ago
This is completely tangential, but I want to share my latest GPT4 story somewhere.

I've tried before to gaslight GPT4 into saying things which are mathematically untrue: I lied to it, told it it was malfunctioning, told it to just do it. It wouldn't do it.

I was recently studying linear algebra, which can be a very tricky subject. In linear algebra, the column space of a matrix is the same as the column space of its product with its own transpose: C(A) = C(AA^T). If you ask GPT4 if "C(A) = C(AA^T)" is true, it will understand what you're asking and know it's about linear algebra, but it will get it wrong (at the time of this writing, I've tried several times).

I couldn't get GPT4 to agree it was a true statement until I told it the steps of the proof. Once it saw the proof it agreed it was a true statement. However, if you try to apply the same proof to C(A) = C((A^T)A), GPT4 cannot be tricked, and indeed, the proof is not applicable to this latter case.

So GPT4 was incorrect yet able to be persuaded with a correct proof, but a very similar proof with a subtle mistake cannot trick it.
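
A quick way to sanity-check the claim numerically (a minimal NumPy sketch; the specific matrix and the rank-based subspace test are just an illustration): two column spaces are equal iff stacking the matrices side by side doesn't increase the rank.

    import numpy as np

    # C(X) == C(Y) iff rank(X) == rank(Y) == rank([X | Y]).
    def same_column_space(X, Y):
        r = np.linalg.matrix_rank
        return r(X) == r(Y) == r(np.hstack([X, Y]))

    # A square, rank-deficient, non-symmetric example (chosen for illustration).
    A = np.array([[1.0, 0.0],
                  [1.0, 0.0]])

    print(same_column_space(A, A @ A.T))  # True:  C(A) = C(AA^T)
    print(same_column_space(A, A.T @ A))  # False: C(A^T A) = C(A^T), which differs from C(A) here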

murderberry · 2 years ago
"Persuaded" is a loaded word here, and I think you're anthropomorphizing it a bit too much.

Early LLMs were very malleable, so to speak: they would go with the flow of whatever you said. But this also meant you could get them to deny climate change or advocate for genocide by subtly nudging them with prompts. A lot of RLHF work focused on getting them to give brand-safe, socially acceptable answers, and this is ultimately achieved by not giving much credence to what the user says. In effect, the models pontificate instead of conversing, and will "stand their ground" on most of their claims, whether right or wrong.

You can still get them to do a 180 or say outrageous things using indirect techniques, such as presenting external evidence. That evidence can be wrong or bogus; it just shouldn't be phrased as your opinion. You can cite made-up papers by noted experts in the field, reference invalid mathematical proofs, etc.

It's quite likely that this is what you replicated, and that it worked in one case but not the other more or less at random. I'd urge you to experiment by providing it with patently incorrect but plausible-sounding proofs, scientific references, etc. It will "change its mind" to say what you want more often than not.

murderberry commented on Mastodon provides the highest (over 12%) engagement under posts   climatejustice.rocks/@kat... · Posted by u/ZacnyLos
mjr00 · 2 years ago
The methodology is "(likes + shares + comments)/followers", and the two platforms that allegedly provide the highest engagement are the ones with an order of magnitude fewer followers.

A more likely explanation is that fewer followers means higher engagement, which totally makes sense; for a larger social network account with 100k+ followers, many of those will be bots and people who follow anyone and everyone without really caring. A smaller account will only have followers who care.

I don't think there's anything specific about Mastodon here other than it being a smaller social network so you naturally have far fewer followers.

In either case, this is a single account talking about a single post, and shouldn't be used to generalize different levels of engagement across social networks.
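
To put hypothetical numbers on it (these figures are made up purely to show how the metric behaves for small versus large accounts):

    # engagement = (likes + shares + comments) / followers
    def engagement(likes, shares, comments, followers):
        return (likes + shares + comments) / followers

    # Hypothetical accounts: more interactions in absolute terms still
    # scores far lower once divided by a big follower count.
    big   = engagement(likes=400, shares=50, comments=50, followers=100_000)
    small = engagement(likes=200, shares=30, comments=20, followers=2_000)
    print(f"{big:.1%} vs {small:.1%}")  # 0.5% vs 12.5%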

murderberry · 2 years ago
Another fairly reasonable hypothesis is the difference in account age. If you've been on Twitter for a decade, a lot of the accounts following you are just dead: people who no longer use the platform, who forgot their passwords, etc. The followers you have on Mastodon are more likely to be engaged because the vast majority joined in the past 12 months or so.
murderberry commented on The greatest risk of AI is from the people who control it, not the tech itself   aisnakeoil.substack.com/p... · Posted by u/nickwritesit
phillipcarter · 2 years ago
A specific risk I am worried about today is using AI to power and make impactful decisions in high-risk infrastructure that people rely on. I do not want my power company making decisions about me based on a large language model that regularly gets things wrong. Not without significant controls in place, explainability/auditability, and the ability to quickly reverse any bad decision. Replace "power company" with the numerous things we all rely on today and it freaks me out.

People are jumping too quickly to deeply integrate this tech with everyday things, and while that's great for many use cases, it's not so great for others.

You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

murderberry · 2 years ago
A power company is unlikely to do this, in part because it's not in an industry that fetishizes growth at any cost while the value of an individual user approaches zero, and in part because in many markets it's regulated and obligated to provide service.

But our industry already operates this way. Google will cut you off for triggering automated rules, and good luck getting human help. AI will not make this worse, but it will be used by such businesses to give their customer support the appearance of being better. It will feel like you're talking to a real person again.
