Readit News
skoocda commented on AI Is Being Misunderstood as a Breakthrough in Planning. It's Not   warontherocks.com/2026/02... · Posted by u/skoocda
skoocda · 7 days ago
While the article is oriented towards defense planning, many of these points apply to any sufficiently complex software engineering project.

Generating several "competing constructs" used to be a wanton misappropriation of resources, but now it's not only viable... it's cheap. Comparing these constructs, however, has not necessarily become easier.

Leaders need to avoid accepting any given plan on the strength of its surface appearance of competence (they will all appear internally coherent), and instead spend more effort determining which difficult questions have been left unasked. As we learned from HHGttG, the answer is the easy part.

skoocda commented on Anthropic Cowork feature creates 10GB VM bundle on macOS without warning   github.com/anthropics/cla... · Posted by u/mystcb
swyx · 13 days ago
what would you use it for?
skoocda · 13 days ago
Not OP, but having the exact VM spec your agent runs on is useful for testing. I want to make sure my code works perfectly on any ephemeral environments an agent uses for tasks, because otherwise the agent might invent some sort of degenerate build and then review against that. Seen it happen many times on Codex web.
skoocda commented on Can a Computer Science Student Be Taught to Design Hardware?   semiengineering.com/can-a... · Posted by u/stn8188
skoocda · 25 days ago
I have a degree in EE (2016) and am doing mostly ML engineering with a considerable amount of SWE tasks in my day-to-day.

Of my graduating class, very few are designing hardware. Most are writing code in one form or another. There were very few jobs available in EE that didn't underpay and lock you into an antiquated skillset, whether in renewables/MRI/nuclear/control etc.

We had enough exposure to emerging growth areas (computer vision, reinforcement learning, GPUs) to learn useful skills, and those all had free and open source systems to study after graduation, unlike chip design.

The company sponsoring this article is a contributor to that status quo. The complete lack of grassroots support for custom chips in North America, including a dearth of open source design tools or a community around them, has made it a complete non-starter for upskilling. Nobody graduates from an EE undergrad with real capability in the chip design field, so unless you did graduate studies, you probably just ended up learning more and more software skills.

But the relentless off-shoring of hardware manufacturing is likely the ultimate cause. These days, most interesting EE roles I see require fluency in Mandarin.

skoocda commented on The Adolescence of Technology   darioamodei.com/essay/the... · Posted by u/jasondavies
skoocda · 2 months ago
The one thing I really disagree with is the notion that there will be millions of identical AI images.

The next big step is continual learning, which enables long-term adaptive planning and "re-training" during deployment. AI with continual learning will have a larger portion of their physical deployment devoted to the unique memories they developed via individual experiences. The line between history/input context/training corpus will be blurred and deployed agents will go down long paths of self-differentiation via choosing what to train themselves on; eventually we'll end up with a diaspora of uniquely adapted agents.

Right now inference consists of one massive set of weights and biases duplicated for every consumer and a tiny unique memory file that gets loaded in as context to "remind" the AI of the experiences it had (or did it?) with this one user / deployment. Clearly, this is cheap and useful to scale up initially but nobody wants to spend the rest of their life with an agent that is just a commodity image.

In the future, I think we'll realize that adding more encyclopedic knowledge is not a net benefit for most common agents (but we will provide access to niche knowledge behind "domain-specific" gates, like an MoE model but possibly via MCP call), and instead allocate a lot more physical capacity to storing and processing individualized knowledge. Agents will slow down on becoming more book smart, but will become more street smart. Whether or not this "street smart" knowledge ever gets relayed back to a central corpus is probably mostly dependent on the incentives for the agent.

Certainly my biggest challenge after a year of developing an industrial R&D project with AI assistance is that it needs way, way more than 400k tokens of context to understand the project properly. The emerging knowledge graph tools are a step in the right direction, certainly, but they're not nearly integrated enough. From my perspective, we're facing a fundamental limitation: as long as we're on the Transformers architecture with O(n^2) attention scaling, I will never get a sufficiently contextualized model response. Period.
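The O(n^2) claim is easy to put numbers on. A rough sketch (this assumes naive attention that materializes the full n x n score matrix in fp16 for a single head; FlashAttention-style kernels avoid storing it, but the compute still scales quadratically):

```python
def naive_attn_scores_gib(n_tokens: int, bytes_per_score: int = 2) -> float:
    """Memory for one n x n attention score matrix (one head, one layer)."""
    return n_tokens**2 * bytes_per_score / 2**30

for n in (4_000, 32_000, 400_000):
    print(f"{n:>7} tokens -> {naive_attn_scores_gib(n):8.2f} GiB per head")
```

At 400k tokens the naive score matrix alone is roughly 298 GiB per head, which is why long-context systems lean on kernel tricks, sparsity, or alternative architectures rather than the textbook formulation.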

You might notice this yourself if you ask Claude 4.5 (knowledge cutoff Jan 2025) to ramp up on geopolitical topics over the past year. It is just not physically possible in 400k tokens. Architectures like Mamba or HOPE or Sutton's OaK may eventually fix this, and we'll see a long-term future resembling Excession, where individual agents develop in enormously different ways, even if they came from the same base image.

skoocda commented on Rust Atomics and Locks (2023)   marabos.nl/atomics/... · Posted by u/0xedb
skoocda · 2 years ago
I've done a cursory skim of this and plan to start reading it in earnest next week. Looks comprehensive and accessible. Very excited.
skoocda commented on Dear AI Companies, instead of scraping OpenStreetMap, how about a $10k donation?   en.osm.town/@Firefishy/11... · Posted by u/RicoElectrico
yifanl · 2 years ago
You can rather easily set up semi-hard rate limiting with a proof-of-work scheme. It only trivially affects human users, while bot spammers have to eat the cost of a million hash computations per hour or whatever.
skoocda · 2 years ago
e.g. Hashcash
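A minimal Hashcash-style sketch of the scheme described above: the client brute-forces a nonce whose hash has enough leading zero bits (expensive), while the server verifies with a single hash (cheap). The function names and difficulty are illustrative, not from any real OSM or Hashcash implementation:

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint(resource: str, bits: int) -> int:
    """Client side: brute-force a nonce -- the expensive step (~2**bits hashes)."""
    for nonce in count():
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= bits:
            return nonce

def verify(resource: str, nonce: int, bits: int) -> bool:
    """Server side: one hash to validate the stamp."""
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= bits
```

Raising `bits` by one doubles the expected client-side work while the server's verification cost stays constant, which is what makes the asymmetry work as a rate limiter.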

Deleted Comment

skoocda commented on A hydrogen-powered air taxi flew 523 miles emitting only water vapor   popsci.com/technology/hyd... · Posted by u/geox
jfengel · 2 years ago
If you had all of that H2, what is the additional cost to just fix some CO2 into hydrocarbons?

It feels like that would be a much simpler way to get to net zero than having to reinvent all of the infrastructure.

So much simpler that I wonder why anyone would keep trying on hydrogen. Which makes me darkly suspect that the goal is to take our attention off the solution that's already being deployed, i.e. wind and solar.

skoocda · 2 years ago
Regardless of net efficiency, that still entails collecting CO2 at a central facility (where it could have been dealt with in other ways, such as injection underground) and sprinkling it through the air as you fly over delicate ecosystems. I'm sure bankers see both as net zero, but condors might have more issues with your simpler workaround.
skoocda commented on A hydrogen-powered air taxi flew 523 miles emitting only water vapor   popsci.com/technology/hyd... · Posted by u/geox
briandw · 2 years ago
Cost to produce H2 from water: 50-55 kWh/kg. Cost to liquefy H2: 10-13 kWh/kg. 1 kg of H2 stores about 33 kWh of energy. More than 50% of the energy is wasted before transport, storage, boil-off, etc. are even considered.

H2 does not make any sense whatsoever.
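Plugging the comment's own figures into a quick check (the electrolysis and liquefaction numbers are the low/high estimates from the comment above, nothing more):

```python
stored = 33  # kWh usefully stored per kg of H2

# (electrolysis, liquefaction) energy inputs in kWh per kg: low and high estimates
for produce, liquefy in [(50, 10), (55, 13)]:
    lost = 1 - stored / (produce + liquefy)
    print(f"{produce}+{liquefy} kWh in -> {lost:.0%} lost before transport and storage")
```

This gives roughly 45-51% lost, i.e. about half the input energy is gone before transport, storage, and boil-off even enter the picture.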

skoocda · 2 years ago
That gravimetric energy density is about 2 orders of magnitude higher than that of lithium-ion batteries.
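A quick sanity check on the orders-of-magnitude claim, taking the 33 kWh/kg figure from the parent comment and a typical commercial Li-ion cell figure of ~250 Wh/kg (my assumption, not from the thread):

```python
import math

h2_wh_per_kg = 33_000    # 33 kWh/kg, from the parent comment
li_ion_wh_per_kg = 250   # typical commercial Li-ion cell (assumed)

ratio = h2_wh_per_kg / li_ion_wh_per_kg
print(f"{ratio:.0f}x, ~{math.log10(ratio):.1f} orders of magnitude")
```

Roughly a 130x gap, so about two orders of magnitude, though system-level density (tanks, insulation) narrows it considerably.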

u/skoocda

Karma: 240 · Cake day: June 5, 2014
About
ex-founder of Mosaic Manufacturing Ltd.

ex-founder of Spreza Technologies Co.

Xoogler.

Currently doing R&D at SDI
