Readit News
jofer commented on Best practices for dealing with human waste in the great outdoors   theconversation.com/how-t... · Posted by u/rntn
amarant · 4 days ago
Why is human fecal matter worse for the environment than animal fecal matter?

Something in our diets?

jofer · 3 days ago
In addition to disease, a key issue in many climates is toilet paper. Your average deer isn't leaving white paper around that takes a decade (in dry climates) to go away. That's a non-issue in wet areas, but a large one in deserts and other arid regions.
jofer commented on Ask HN: The government of my country blocked VPN access. What should I use?    · Posted by u/rickybule
herodoturtle · 7 days ago
On a related note, does anyone have insight into *why* the Indonesian government is doing this?
jofer · 7 days ago
jofer commented on Please Don't Promote Wayland   stoppromotingwayland.netl... · Posted by u/PKop
jofer · 23 days ago
As much as X11 is an overly complex and dated protocol, this article hits its mark well. My current desktop is actually running Wayland, but I still need X11 for a variety of reasons.

I see the development advantages of Wayland, but not the practical advantage as a user. And even as a developer, X11 is stable and well known (albeit definitely weird in places).

At the end of the day, things worked perfectly on X11 and my audio and video and various apps still glitch a lot on Wayland even after all these years. Most of that is not exactly Wayland's fault, but it highlights the advantage of X11. It's the devil you know (and everyone has worked out a lot of edge cases for).

jofer commented on Claude Code is all you need   dwyer.co.za/static/claude... · Posted by u/sixhobbits
zmmmmm · 24 days ago
> I have been told I should never be editing code directly and been told I must use AI tooling by various higher level execs and managers

Wow, this is really extreme. We certainly got to this point faster than I expected.

jofer · 23 days ago
To be fair, it's the higher-level folks who are too far removed from things to have any actual authority. I've never heard a direct single-team engineering manager say something like that. But yeah, CEOs say crazy crap. And we're definitely there, though to be fair, his exact quote was "I insist everyone try to have AI generate your code first before you try making any direct changes". It's not _quite_ as bad as what I described. But then middle management buys in and says similar things. And we now have a company-level OKR around having 80% of software engineers relying on AI tooling. It's a silly thing to dictate.
jofer commented on Claude Code is all you need   dwyer.co.za/static/claude... · Posted by u/sixhobbits
jofer · 24 days ago
I appreciate this writeup. I live in the terminal and work primarily in vim, so I always appreciate folks talking about tooling from that perspective. Little of the article is that, but it's still interesting to see the workflow outlined here, and it gives me a few ideas to try more of.

However, I disagree that LLMs are anywhere near as good as what's described here for most things I've worked with.

So far, I'm pretty impressed with Cursor as a toy. It's not a usable tool for me, though. I haven't used Claude a ton, though I've seen co-workers use it quite a bit. Maybe I'm just not embracing the full "vibe coding" thing enough and not allowing AI agents to fully run wild.

I will concede that Claude and Cursor have gotten quite good at frontend web development generation. I don't doubt that there are a lot of tasks where they make sense.

However, I still have yet to see a _single_ example of any of these tools working for my domain. Every single case, even when the folks who are trumpeting the tools internally run the prompting/etc, results in catastrophic failure.

The ones people trumpet internally are cases where folks can't be bothered to learn the libraries they're working with.

The real issue is that people who aren't deeply familiar with the domain don't notice the problems with the changes LLMs make. They _seem_ reasonable. Essentially by definition.

Despite this, we are being nearly forced to use AI tooling on critical production scientific computing code. I have been told I should never be editing code directly and been told I must use AI tooling by various higher level execs and managers. Doing so is 10x to 100x slower than making changes directly. I don't have boilerplate. I do care about knowing what things do because I need to communicate that to customers and predict how changes to parameters will affect output.

I keep hearing things described as an "overactive intern", but I've never seen an intern this bad, and I've seen a _lot_ of interns. Interns don't make 1000 line changes that wreck core parts of the codebase despite being told to leave that part alone. Interns are willing to validate the underlying mathematical approximations to the physics and are capable of accurately reasoning about how different approximations will affect the output. Interns understand what the result of the pipeline will be used for and can communicate that in simple terms or more complex terms to customers. (You'd think this is what LLMs would be good at, but holy crap do they hallucinate when working with scientific terminology and jargon.)

Interns have PhDs (or in some cases, are still in grad school, but close to completion). They just don't have much software engineering experience yet. Maybe that's the ideal customer base for some of these LLM/AI code generation strategies, but those tools seem especially bad in the scientific computing domain.

My bottleneck isn't how fast I can type. My bottleneck is explaining to a customer how our data processing will affect their analysis.

(To our CEO) - Stop forcing us to use the wrong tools for our jobs.

(To the rest of the world) - Maybe I'm wrong and just being a luddite, but I haven't seen results that live up to the hype yet, especially within the scientific computing world.

jofer commented on CCTV footage captures video of an earthquake fault in motion   smithsonianmag.com/smart-... · Posted by u/chrononaut
v3ss0n · a month ago
4.x to 5.x earthquakes are still happening a few times a week, and the area hasn't been able to recover from the disaster. Last week a four-story building next to my friend's house collapsed near Mandalay.

Does that mean Myanmar is now an active zone?

jofer · a month ago
It's always been active. The Sagaing fault is a plate boundary. You're seeing the "side" of the Indian subcontinent slamming northward into the Eurasian plate.
jofer commented on Why you can't color calibrate deep space photos   maurycyz.com/misc/cc/... · Posted by u/LorenDB
jofer · a month ago
These same things apply to satellite images of the Earth as well. Even when you have optical bands that roughly correspond to human eye sensitivity, they have quite a different response pattern. You're also often not working with those wavelength bands in the visualizations you make.

Scientific sensors want as "square" a spectral response as possible. That's quite different from the human eye's response. Getting a realistic RGB visualization from a sensor is very much an art form.
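To give a feel for the judgment calls involved, here's a minimal sketch of mapping three arbitrary sensor bands onto display RGB. Everything here is an assumption for illustration (the percentile cutoffs, the gamma value, the band arrays themselves); real "true color" products from any given sensor involve much more, such as band math and atmospheric correction.

```python
import numpy as np

def to_display_rgb(red, green, blue, gamma=2.2):
    """Map three arbitrary sensor bands onto display RGB.

    Each band is independently stretched to its 2nd-98th percentile
    range, then gamma-corrected. Both choices are judgment calls --
    part of why realistic composites are an art, not a calibration.
    """
    channels = []
    for band in (red, green, blue):
        lo, hi = np.percentile(band, (2, 98))
        stretched = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
        channels.append(stretched ** (1.0 / gamma))
    return np.dstack(channels)  # H x W x 3 array of floats in [0, 1]
```

Change the percentiles or gamma and you get a visibly different "true color" image from the same data, which is exactly the point.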

jofer commented on Why you can't color calibrate deep space photos   maurycyz.com/misc/cc/... · Posted by u/LorenDB
Retr0id · a month ago
The next space mission should be to leave a colour calibration chart on the moon.
jofer · a month ago
The moon itself is already one. Moonshots are widely used in calibration, at least for Earth-observation satellites. The brightness of the full moon at each wavelength on each day of the year is predictable and well known, so it makes a good target to check your payload against.
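The idea can be sketched in a few lines. This is a hypothetical illustration, not any agency's actual procedure: the function name, inputs, and numbers are all assumptions. The gist is to compare the radiance you observe over the lunar disk against what a lunar irradiance model (e.g. a ROLO-style model) predicts for that band, date, and viewing geometry, and use the ratio to correct for detector drift.

```python
def lunar_cal_gain(measured_dn, predicted_radiance, current_gain):
    """Estimate an updated radiometric gain from a moonshot.

    measured_dn:        mean detector counts over the lunar disk for one band
    predicted_radiance: modeled lunar radiance for that band, date, and
                        viewing geometry (e.g. from a ROLO-style model)
    current_gain:       radiance-per-count currently assumed for the band
    """
    observed_radiance = measured_dn * current_gain
    drift = observed_radiance / predicted_radiance  # > 1 means reading hot
    return current_gain / drift  # gain corrected so observation matches model
```

Trending this drift estimate over many moonshots is what lets operators separate real scene changes from slow detector degradation.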
jofer commented on Why you can't color calibrate deep space photos   maurycyz.com/misc/cc/... · Posted by u/LorenDB
klysm · a month ago
Recently I've been on a bit of a deep dive regarding human color vision and cameras. This left me with the general impression that RGB Bayer filters are vastly over-utilized (mostly due to market share), and they are usually not great for tasks other than mimicking human vision! For example, if you have a stationary scene, why not put a whole bunch of filters in front of a mono camera and get much more frequency information?
jofer · a month ago
In case you weren't already aware, that last bit basically describes most optical scientific imaging (e.g. satellite imaging or spectroscopy in general).
jofer commented on Would You Like an IDOR With That? Leaking 64m McDonald's Job Applications   ian.sh/mcdonalds... · Posted by u/samwcurry
ryandrake · 2 months ago
> The personality test was a disturbing experience powered by Traitify.com where we were asked if phrases like “enjoys overtime” are either Me or Not Me. It was simple to guess that we should probably select Me for the pro-employer questions and Not Me for questions referencing being argumentative or aggressive, but it was still quite strange.

Offtopic from the security issue, but I wonder if they really get any value out of this "Personality test." It seems like it's just a CAPTCHA that makes sure the applicant knows when to lie correctly.

jofer · 2 months ago
Similar tests have been standard for over 20 years. When I worked at McDonald's (late '90s), they didn't do the personality test, but when I applied across the street at Arby's a few years later, they did.

The one that made me decide it wasn't worth switching from McD's to Arby's was "would you rather read a book or talk to a person?". I mean, I get it: they want people-focused people, but being introverted and/or just liking books doesn't mean you can't give excellent customer service.

Sure, it's easy to guess what they want most of the time, but the fact that personality tests are as widespread as they are in employment is maddening.

Many years later I worked at Chevron (upstream as an exploration geologist -- not a gas station). While they didn't do it as part of the application process, you were required to take a personality/communication-style test when you started (ecolors). That's all well and good (it _is_ very useful to understand personalities for communication styles), but in a lot of roles you literally had to wear the colors on your badge.

If you wanted to go into management, you essentially had to score "red over yellow". "Greens" and "blues" were considered limited to technical roles and were explicitly not given opportunities to advance, though it took a long time to realize that. I started out thinking "hey, this is actually practical" and then over a few years went to "oh, they're using this to decide who moves up... that's a problem". I asked around and was told by my manager's manager that ecolors were explicitly used in advancement criteria and in deciding who got opportunities to lead projects/etc. That's around the time I left.

I hear they've dialed that particular bit back a lot, but it's still very weird to me that it's considered a normal and acceptable practice.

u/jofer

Karma: 4000 · Cake day: July 2, 2010
About
I'm a geologist (or a geophysicist, take your pick). These days I get to work on some really neat remote sensing and image processing problems at Planet. SLC-based, formerly Houston, TX.

meet.hn/city/40.7596198,-111.886797/Salt-Lake-City

Socials:

- bsky.app/profile/joferkington

- github.com/joferkington
