turtletontine commented on Economics of Orbital vs. Terrestrial Data Centers   andrewmccalip.com/space-d... · Posted by u/flinner
zozbot234 · 2 days ago
Will these space-based data centers run on rad-hard silicon (which is dog slow compared to anything on Earth) or just silently accept wrong results, hardware lockups and permanent failure due to the harsh space environment? Will they cool that hardware with special über-expensive high-temperature Peltiers that heat the radiators up to visible incandescence so that the heat can be shed with any efficiency? There are zillions of these issues. The whole idea is just bonkers.
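
To put rough numbers on the cooling problem: radiated flux scales as T^4 (Stefan-Boltzmann), so radiators only get compact if they run very hot. A back-of-the-envelope sketch, where the 1 MW heat load and 0.9 emissivity are assumptions picked purely for illustration:

    const std = @import("std");

    // Radiator sizing via the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
    // The 1 MW load and 0.9 emissivity are illustrative assumptions.
    pub fn main() void {
        const sigma: f64 = 5.670e-8; // W / (m^2 * K^4)
        const eps: f64 = 0.9; // emissivity (assumed)
        const load_w: f64 = 1.0e6; // 1 MW of waste heat (assumed)
        const temps = [_]f64{ 300.0, 600.0, 1000.0 }; // radiator temperature, K
        for (temps) |t| {
            const flux = eps * sigma * t * t * t * t; // W/m^2 radiated
            std.debug.print("T = {d:.0} K -> {d:.0} m^2 per MW\n", .{ t, load_w / flux });
        }
    }

At a benign 300 K you need a couple of thousand square meters of radiator per megawatt; you only get down to tens of square meters by running the panels near incandescence, which is exactly the trade-off above.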
turtletontine · a day ago
> Will…

I think “won’t”. I could be wrong, of course, but I imagine efforts to put servers into orbit will die before anything is launched. It’s just a bad idea. Maybe a few grifters will make bank taking suckers’ money before it becomes common knowledge that this is stupid, but I will be genuinely surprised if real servers with GPUs are ever launched.

I don’t mean to be facetious here. But saying “will” is treating it as inevitable that this will happen, which is how the grifters win.

turtletontine commented on A quarter of US-trained scientists eventually leave   arxiv.org/abs/2512.11146... · Posted by u/bikenaga
MostlyStable · 2 days ago
If the culture shifted such that a much larger proportion of research were conducted by permanent, non-faculty research employees, it would reduce the need for so many students, increase the jobs available to those students when they graduate, and create a new employment niche with a different balance of teaching/administration/research. It would basically turn "post doc" into an actual career rather than a stopover.

This would be better for everyone involved, at the admitted cost of being quite a bit more expensive. My guess is that the market would naturally converge on this equilibrium if information on job placement rates per program (or even per lab/advisor) were more readily available.

turtletontine · 2 days ago
What you’re describing sounds a lot like the Department of Energy national labs. They have (or had) many permanent research roles without teaching obligations, where scientists can have long, stable careers.

The problem, as always, is funding. In the US, the federal govt is essentially the only “customer” of basic research. There’s some private funding, often from kooky millionaires who want someone to invent a time machine, but it’s the exception that proves the rule. Universities sometimes have pure research roles, but those generally depend on the employee funding their own salary through a constant stream of grants. It’s a stressful and precarious position.

turtletontine commented on Has the cost of building software dropped 90%?   martinalderson.com/posts/... · Posted by u/martinald
kalx · 8 days ago
Let’s say you’re right. Do we still want to, though? I mean, at some point we will no longer have the skill to babysit the AI agent.
turtletontine · 8 days ago
When LLM-written code bases become too complex for the humans on deck to understand and debug… that sounds like the turning point when companies go back to real developers, IMO. Any serious mission-critical code needs knowledgeable people who can leap into action when s** hits the fan, put out fires, and patch critical bugs.
turtletontine commented on Netflix’s AV1 Journey: From Android to TVs and Beyond   netflixtechblog.com/av1-n... · Posted by u/CharlesW
hombre_fatal · 12 days ago
Not sure how it works on Android, but it's such amateur UX on Apple's part.

99.9% of people expect HDR content to get capped / tone-mapped to their display's brightness setting.

That way, HDR content is just magically better. I think this is already how HDR works on non-HDR displays?

For the 0.1% of people who want something different, it should be a toggle.

Unfortunately I think this is either (A) amateur enshittification like with their keyboards 10 years ago, or (B) Apple specifically likes how it works since it forces you to see their "XDR tech" even though it's a horrible experience day to day.

turtletontine · 12 days ago
99% of people have no clue what “HDR” and “tone-mapping” mean, but yes, they’re probably weirded out by some videos being randomly way brighter than everything else.
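
For the unfamiliar: tone mapping just means compressing the scene’s luminance range into whatever the display can actually show, instead of letting highlights blow past it. A toy sketch of the idea (the Reinhard curve here is one textbook operator, not whatever Apple actually ships):

    const std = @import("std");

    // Maps unbounded HDR luminance smoothly into [0, 1), where 1.0 is the
    // display's peak. Reinhard is a textbook choice, not Apple's pipeline.
    fn reinhard(hdr_luma: f32) f32 {
        return hdr_luma / (1.0 + hdr_luma);
    }

    pub fn main() void {
        // 1.0 = "paper white"; HDR content routinely goes well above it.
        const samples = [_]f32{ 0.25, 1.0, 4.0, 16.0 };
        for (samples) |l| {
            std.debug.print("scene {d:.2} -> display {d:.2}\n", .{ l, reinhard(l) });
        }
    }

Something like this is why HDR content can “just work” on dimmer displays: bright highlights get squeezed rather than clipped or blasted at full intensity.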
turtletontine commented on We gave 5 LLMs $100K to trade stocks for 8 months   aitradearena.com/research... · Posted by u/cheeseblubber
gwd · 13 days ago
The summary to me is here:

> Almost all the models had a tech-heavy portfolio which led them to do well. Gemini ended up in last place since it was the only one that had a large portfolio of non-tech stocks.

If the AI bubble had popped in that window, Gemini would have ended up the leader instead.

turtletontine · 13 days ago
Yup. This is the fallacy of thinking you’re a genius because you made money in the market. Being lucky right now (or even for the last 5 years) does not mean you’ll continue to be lucky in the future.

“Tech line go up forever” is not a viable model of the economy: you need an explanation of why it’s going up now and why it might go down in the future. You also need models of many other industries, to understand when and why to invest elsewhere.

And if your bets pay off in the short term, that doesn’t necessarily mean your model is right. You could have chosen the right stocks for the wrong reasons! Past performance doesn’t guarantee future results.
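
A toy illustration of the luck problem: hand a thousand simulated traders pure coin-flip monthly returns, and a handful will still be “up” every single month of an 8-month window:

    const std = @import("std");

    // 1000 "traders" with coin-flip monthly returns. Expect roughly
    // 1000 / 2^8, i.e. about 4, to be up all 8 months by luck alone.
    pub fn main() void {
        var prng = std.Random.DefaultPrng.init(42);
        const rand = prng.random();

        var lucky: u32 = 0;
        for (0..1000) |_| {
            var up_months: u32 = 0;
            for (0..8) |_| {
                if (rand.boolean()) up_months += 1;
            }
            if (up_months == 8) lucky += 1;
        }
        std.debug.print("{d} of 1000 traders were up all 8 months\n", .{lucky});
    }

Those few traders look like geniuses, but their track record carries zero information about next month.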

turtletontine commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
turnsout · 15 days ago
Absolutely. I don't understand why investors are excited about getting into a negative-margin commodity. It makes zero sense.

I was an OpenAI fan from GPT 3 to 4, but then Claude pulled ahead. Now Gemini is great as well, especially at analyzing long documents or entire codebases. I use a combination of all three (OpenAI, Anthropic & Google) with absolutely zero loyalty.

I think the AGI true believers see it as a winner-takes-all market as soon as someone hits the magical AGI threshold, but I'm not convinced. It sounds like the nuclear lobby's claims that they would make electricity "too cheap to meter."

turtletontine · 15 days ago
> I don't understand why investors are excited about getting into a negative-margin commodity. It makes zero sense.

Long term, yes. But Wall Street does not think long term. Short or medium term, you just need to cash out to the next sucker in line before the bubble pops, and there are fortunes to be made!

turtletontine commented on Zig's new plan for asynchronous programs   lwn.net/SubscriberLink/10... · Posted by u/messe
messe · 15 days ago
> Every struct or bare fn now needs (2) fields/parameters by default.

Storing interfaces as fields in structs is becoming a bit of an anti-pattern in Zig. There are still use cases for it, but you should think twice about making it your go-to strategy. There's been a recent shift in the standard library toward "unmanaged" containers, which don't store a copy of the Allocator interface; instead, an Allocator is passed to any member function that allocates.

Previously, one would write:

    var list: std.ArrayList(u32) = .init(allocator);
    defer list.deinit();
    for (0..count) |i| {
        try list.append(@intCast(i)); // i is usize, the list holds u32
    }
Now, it's:

    var list: std.ArrayList(u32) = .empty;
    defer list.deinit(allocator);
    for (0..count) |i| {
        try list.append(allocator, @intCast(i));
    }
Or better yet:

    var list: std.ArrayList(u32) = .empty;
    defer list.deinit(allocator);
    try list.ensureUnusedCapacity(allocator, count); // Allocate up front
    for (0..count) |i| {
        list.appendAssumeCapacity(@intCast(i)); // No try or allocator necessary here
    }

turtletontine · 15 days ago
I’m not sure I see how each example improves on the previous (though granted, I don’t really know Zig).

What happens if you call append() with two different allocators? Or if you deinit() with a different allocator than the one that actually handled the memory?
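
To make the question concrete (a sketch built on the API in the comment above; my understanding is that freeing through a different allocator than the one that allocated is undefined behavior, just like mismatched malloc/free in C):

    const std = @import("std");

    pub fn main() !void {
        var gpa: std.heap.GeneralPurposeAllocator(.{}) = .{};
        defer _ = gpa.deinit();
        const alloc_a = gpa.allocator();
        const alloc_b = std.heap.page_allocator;
        _ = alloc_b; // only named here to make the mismatch visible

        var list: std.ArrayList(u32) = .empty;
        try list.append(alloc_a, 1); // backing memory comes from alloc_a
        // list.deinit(alloc_b);     // WRONG: would free alloc_a's memory via alloc_b
        list.deinit(alloc_a); // must release through the same allocator
    }

Nothing in the list itself stops you; it just stores a pointer, so the mismatch is on you (though a debug allocator may catch the invalid free at runtime).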

turtletontine commented on AI Adoption Rates Starting to Flatten Out   apolloacademy.com/ai-adop... · Posted by u/toomuchtodo
mwkaufma · 19 days ago
Aside from financially-motivated "testimonials," there's no broad evidence that it even works that well for coding, with many studies showing the opposite. Damning with faint praise.
turtletontine · 19 days ago
I think what’s clear is that many people feel much more productive coding with LLMs, but perceived and actual productivity don’t necessarily correlate. I’m sure results vary quite a bit.

My hunch is that the long-term value might be quite low: a few years into vibe coding huge projects, developers might hit a wall with a mountain of slop code they can no longer manage or understand. There was an article here recently titled “vibe code is legacy code” which made a similar argument. Again, results surely vary wildly.

turtletontine commented on Apple and Intel Rumored to Partner on Mac Chips   macrumors.com/2025/11/28/... · Posted by u/bigyabai
bigyabai · 19 days ago
AFAIK Intel Foundry Services is the only product they can't find big customers for. Apple would be the first if they move past the sampling phase.
turtletontine · 19 days ago
They’ve already secured Microsoft as a customer: they’ll be making the next Maia AI accelerator for Azure on 18A. Apple would be a much bigger catch for sure, but Intel has in fact landed one big customer.
