Readit News
corysama commented on VHS-C: When a lazy idea stumbles towards perfection [video]   youtube.com/watch?v=HFYWH... · Posted by u/surprisetalk
FirmwareBurner · 4 days ago
I can also recommend:

  VWestlife
  This Does Not Compute
  Michael MJD
  Tech Tangents
  Janus Cycle
  LGR
  Posy
  Cathode Ray Dude

corysama · 4 days ago
If the idea of just chilling out and appreciating old tech with a slick presentation sounds good to you, you might like https://youtube.com/@PosyMusic
corysama commented on I forced every engineer to take sales calls and they rewrote our platform   old.reddit.com/r/Entrepre... · Posted by u/bilsbie
swader999 · 5 days ago
My best projects have been where I code side by side with the actual users or subject matter experts. Built a small business loan approval app for a bank, sat right beside the underwriters. Airport billing system, worked one door down from accounting. They came to standup every day, you take breaks with them, gradually they feel like they own the product.
corysama · 5 days ago
Ideally, folks would practice mob programming that includes rotating customers into the room in addition to a designer and a product manager. But, so many engineers have had bad experiences with pair programming that they reflexively reach for strawman arguments against even considering mob programming.

Ex: It doesn't have to be done 24/7 for everything. You can still do the vast majority of your work alone in your cave.

corysama commented on How to Think About GPUs   jax-ml.github.io/scaling-... · Posted by u/alphabetting
pklausler · 6 days ago
So it's a "SIMD lane" that can itself perform actual SIMD instructions?

I think you want a metaphor that doesn't also depend on its literal meaning.

corysama · 6 days ago
Nvidia’s marketing team uses confusing terminology to make their product sound cooler than it is.

An Intel “core” can perform AVX-512 SIMD instructions that operate on 16 lanes of 32-bit data. Intel cores are packaged in groups of up to 16. And, they use hyperthreading, speculative execution and shadow registers to cover latency.

An Nvidia “Streaming Multiprocessor” can perform SIMD instructions on 32 lanes of 32 bits each. Nvidia calls these lanes “cores” to make it feel like one GPU can compete with thousands of Intel CPUs.

Simpler terminology would be: an Nvidia H100 has 114 SM cores, each with four 32-wide SIMD execution units (where basic instructions have a latency of 4 cycles) and four Tensor cores. That’s a lot more capability than a high-end Intel CPU, but not 14,592 times more.

The CUDA API presents a “CUDA Core” (a single SIMD lane) as if it were an independent thread. But, for most purposes, it is really just one lane of a 32-wide “warp”. Lots of caveats apply in the details, though.
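
To make that concrete, here’s a throwaway CUDA C++ sketch of my own (the kernel and names are hypothetical, nothing from the article). What the API calls a “thread” is one lane, and the warp is what actually issues as a SIMD instruction:

  // Hypothetical kernel: each CUDA "thread" here is really one 32-bit lane
  // of a 32-wide warp, and the warp is the unit the SM issues SIMD-style.
  __global__ void scale(float* data, float k, int n)
  {
      int tid  = blockIdx.x * blockDim.x + threadIdx.x;  // global lane ("CUDA core") id
      int lane = threadIdx.x % 32;  // position within the 32-wide warp
      int warp = threadIdx.x / 32;  // which warp inside this thread block
      (void)lane; (void)warp;       // only computed to make the mapping explicit
      if (tid < n)
          data[tid] *= k;           // all 32 lanes of a warp execute this together
  }
  // Launch sketch: 256 "threads" per block = 8 warps, scheduled onto an
  // SM's four 32-wide SIMD execution units.
  // scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);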

corysama commented on I made a real-time C/C++/Rust build visualizer   danielchasehooper.com/pos... · Posted by u/dhooper
corysama · 12 days ago
Looks like a general `fork()` visualizer to me. Which is great!
corysama commented on Don't “let it crash”, let it heal   zachdaniel.dev/p/elixir-m... · Posted by u/ahamez
HexDecOctBin · 17 days ago
How does restarting the process fix the crash? If the process crashed because a file was missing, it will still be missing when the process is restarted. Is an infinite crash-loop considered success in Erlang?
corysama · 16 days ago
I’m only an armchair expert on Erlang. But, having looked into it repeatedly over a couple of decades, my take-away is that the “Let it crash” slogan is good, but presented a bit out of context. Or, at least, it assumes context that most people don’t have.

Erlang is used in situations involving a zillion incoming requests. If an individual request fails… Maybe it was important. Maybe it wasn’t. If it was important, it’s expected that the client will try again. What’s most important is that the rest of the requests are not interrupted.

What makes Erlang different is that it is natural and trivial to shut down an individual request in the event of an error without worrying about putting any other part of the system into a bad state.

You can pull this off in other languages via careful attention to the details of your request-handling code. But, the creators of the Erlang language and foundational frameworks have set their users up for success via careful attention to the design of the system as a whole.

That’s great in the contexts in which Erlang is used. But, in the context of a Java desktop app like OpenOffice, it’s more like saying “Let it throw”. “It” being some user action. And, the slogan being to have a language and framework with such robust exception handling built in that error handling becomes trivial and nearly invisible.

corysama commented on The surprise deprecation of GPT-4o for ChatGPT consumers   simonwillison.net/2025/Au... · Posted by u/tosh
AlecSchueler · 18 days ago
If you're working on a rented GPU are you still doing local work? Or do you mean literally lending out the hardware?
corysama · 18 days ago
Working on a rented GPU would not be local. But, renting a low-end GPU might be cheap enough to use for hobbyist creative work. I'm just musing on lots of different routes to make hobby AI use economically feasible.
corysama commented on The surprise deprecation of GPT-4o for ChatGPT consumers   simonwillison.net/2025/Au... · Posted by u/tosh
michaelbrave · 18 days ago
I've seen quite a bit of this too. The other thing I'm seeing on Reddit is that a lot of people really liked 4.5 for things like worldbuilding and other creative tasks, so a lot of them are upset as well.
corysama · 18 days ago
There is certainly a market/hobby opportunity for "discount AI" for no-revenue creative tasks. A lot of r/LocalLLaMA/ is focused on that area and on squeezing the best results out of limited hardware. Local is great if you already have a 24 GB gaming GPU. But, maybe there's an opportunity for renting out low-power GPUs for casual creative work. Or, an opportunity for a RenderToken-like community of GPU sharing.
corysama commented on The surprise deprecation of GPT-4o for ChatGPT consumers   simonwillison.net/2025/Au... · Posted by u/tosh
andy99 · 18 days ago
Edit to add: according to Sam Altman in the reddit AMA they un-deprecated it based on popular demand. https://old.reddit.com/r/ChatGPT/comments/1mkae1l/gpt5_ama_w...

I wonder how much of the '5 release was about cutting costs vs making it outwardly better. I'm speculating that one reason they'd deprecate older models is that 5 is materially cheaper to run?

Would have been better to just jack up the price on the others. For companies that extensively test the apps they're building (which should be everyone), swapping out a model is a lot of work.

corysama · 18 days ago
The vibe I'm getting from the Reddit community is that 5 is much less "Let's have a nice conversation for hours and hours" and much more "Let's get you a curt, targeted answer quickly."

So, good for professionals who want to spend lots of money on AI to be more efficient at their jobs. And, bad for casuals who want to spend as little money as possible to use lots of datacenter time as their artificial buddy/therapist.

corysama commented on C++26 Reflections adventures and compile-time UML   reachablecode.com/2025/07... · Posted by u/ibobev
mpyne · 24 days ago
I was literally running into something a couple of days ago on my toy C++ project where basic compile-time reflection would have been nice to have for some sanity checking.

And even if it's true that some things can be done already with specific compilers and implementation-specific hacks, it would be really nice to be able to do those things more straightforwardly.

My experience with C++ changes has been that the recent additions to compile-time metaprogramming improve compile times rather than make them worse, because you don't have to reach for std::enable_if<> hacks and recursive templates to do things that a simple generic lambda or constexpr conditional will do, and those hacks are more difficult for both you and the compiler.
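
A toy example of the kind of cleanup I mean (my own sketch, not from the article): the old std::enable_if dance versus a plain if constexpr branch.

  #include <type_traits>

  // Pre-C++17: select the overload with a std::enable_if SFINAE hack.
  template <typename T>
  typename std::enable_if<std::is_integral<T>::value, T>::type
  twice(T x) { return x * 2; }

  // C++17 and later: one template, one readable compile-time branch.
  template <typename T>
  T twice_simple(T x)
  {
      if constexpr (std::is_integral_v<T>)
          return x * 2;   // integral types
      else
          return x + x;   // floating point, etc.
  }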

corysama · 23 days ago
The history of C++ has been one long loop of:

1. So many necessary common practices of C++ are far too complicated!

2. Std committee adds features to make those practices simpler.

3. C++ keeps adding features. It’s too big. They should cut out the old stuff!

4. The std committee points at the decade-long Python 3 fiasco.

5. Repeat.

corysama commented on Live coding interviews measure stress, not coding skills   hadid.dev/posts/living-co... · Posted by u/mustaphah
add-sub-mul-div · 25 days ago
Once in an interview I asked someone to go to the whiteboard for a coding exercise and her body language showed an enthusiasm and fearlessness that, in hindsight, I realized I'd never seen before. She practically sprang up out of the chair.

Most people, even the good ones, show a little hesitance when starting. Which isn't really necessary; most people do fine. I'm not trying to get them to fail, I'm trying to get them to succeed. I want to see if they're smart and understand the problem and the direction of a solution. Not whether they miss any semicolons or can't recall some arcane data structure.

She was one of the best hires I made. Coding interviews can also measure attitude and confidence.

corysama · 25 days ago
Once in a coding interview, someone asked the candidate to go to the whiteboard and the poor guy was so flustered he couldn’t remember how many bits were in a byte.

He was a perfectly intelligent programmer and a fine person. I have no doubt at all he understood the details of bits and bytes. But, that group interview session did not sufficiently manage the stress level of the process. And, so we probably missed out on a perfectly good hire.

That and similar experiences in other group interviews are why my 1:1 interviews are structured around keeping the stress level low and preventing the candidate from freezing up.

u/corysama

Karma: 11,208 · Cake day: October 4, 2008