throwaway31131 commented on Has the cost of building software dropped 90%?   martinalderson.com/posts/... · Posted by u/martinald
averageRoyalty · 2 months ago
I read these sorts of comments every so often and I don't understand them. You are in a sea of people telling you that they are developing software much quicker, and that it ticks the required boxes. I understand that for some reason this isn't the case for your workflow, but it obviously has a lot more value for others.

If you are a chairmaker and everyone gains access to a machine that can spit out all the chair components, but sometimes it spits out only three legs or botches the backs, you might find it pointless. Maybe it can't do all the nice artisan styles you can. But you can be confident others will take advantage of this chair machine, work around the issues, and drive the price down from $20 per chair to $2 per chair. In 24 months, you won't be able to sell enough of your chairs anymore.

throwaway31131 · 2 months ago
Maybe, or maybe the size of the chair market grows because $2 chairs bring more buyers in. The high end is roughly unaffected because its buyers were never going to buy a low-end chair.
throwaway31131 commented on Has the cost of building software dropped 90%?   martinalderson.com/posts/... · Posted by u/martinald
qwertyastronaut · 2 months ago
I don’t know if it’s 90%, but I’m shipping in 2 days things that took 2-4 weeks before.

Opus 4.5 in particular has been a profound shift. I’m not sure how software dev as a career survives this. I have nearly 0 reason to hire a developer for my company because I just write a spec and Claude does it in one shot.

It’s honestly scary, and I hope my company doesn’t fail because as a developer I’m fucked. But… statistically my business will fail.

I think in a few years there will only be a handful of software companies—the ones who already have control of distribution. Products can be cloned in a few weeks now; not long until it’s a few minutes. I used to see a new competitor once every six months. Now I see a new competitor every few hours.

throwaway31131 · 2 months ago
Just out of curiosity, what software product were you making in two weeks before using AI? Or maybe I'm misunderstanding your use of "shipping."
throwaway31131 commented on What the heck is going on at Apple?   cnn.com/2025/12/06/tech/a... · Posted by u/methuselah_in
raw_anon_1111 · 2 months ago
Apple seems to have an AI strategy: throw $1 billion at Google for its models.
throwaway31131 · 2 months ago
And if you believe the numbers from the press on Google’s AI spending, that’s an amazing deal.

https://www.indiatoday.in/technology/news/story/google-ai-bo...

throwaway31131 commented on What the heck is going on at Apple?   cnn.com/2025/12/06/tech/a... · Posted by u/methuselah_in
soared · 2 months ago
This is a good point; technically, don't Chromebooks fit this definition?
throwaway31131 · 2 months ago
I guess we're being a bit vague on timeframe, but Chromebooks launched in 2011, so they're one of those products that took ~10 years to be an overnight success, with 2020 being an accelerant. So my vote is no.
throwaway31131 commented on How elites could shape mass preferences as AI reduces persuasion costs   arxiv.org/abs/2512.04047... · Posted by u/50kIters
pjc50 · 2 months ago
> The real concern for me is incredibly rich people, with no empathy for you or me, having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.

Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.

throwaway31131 · 2 months ago
What's the "Ender's Game approach"? I've read the book, but I'm not sure which part you're referring to.
throwaway31131 commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
threeducks · 2 months ago
I just tried it with GPT-5.1-Codex. The compression ratio is not amazing, so I'm not sure it really worked, but at least it ran without errors.

A few ideas how to make it work for you:

1. You gave a link to a PDF, but you did not describe how you provided its content to the model. It might only have read the text with something like pdftotext, which for this PDF results in a garbled mess. It is safer to convert the pages to PNG (e.g. with pdftoppm) and let the model read the rendered pages (see the sketch after this list). A prompt like "Transcribe these pages as markdown." should be sufficient. If you cannot see what the model did, there is a chance it made things up.

2. You used C++, but Python is much easier to write. You can tell the model to translate the code to C++ once it works in Python.

3. Tell the model to write unit tests to verify that the individual components work as intended.

4. Use Agent Mode and tell the model to print something and to judge whether the output is sensible, so it can debug the code.
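
A minimal sketch of the page-to-image step from suggestion 1, assuming poppler-utils' pdftoppm is installed; the file name paper.pdf, the resolution, and the output prefix are all illustrative:

    # Render each PDF page to a PNG so the model sees the laid-out pages
    # instead of garbled pdftotext output. Requires poppler-utils.
    import subprocess

    # "paper.pdf" is a placeholder; pdftoppm writes page-<n>.png for each page.
    subprocess.run(["pdftoppm", "-png", "-r", "150", "paper.pdf", "page"], check=True)

    # Then attach the resulting PNGs to the model along with a prompt such as:
    #   "Transcribe these pages as markdown."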

throwaway31131 · 2 months ago
Interesting. Thanks for the suggestions.
throwaway31131 commented on OpenAI declares 'code red' as Google catches up in AI race   theverge.com/news/836212/... · Posted by u/goplayoutside
jpalomaki · 2 months ago
Can you give some concrete example of a programming task that GPT fails to solve?

I'm interested because I've been getting pretty good results on different tasks using Codex.

throwaway31131 · 2 months ago
I posted this example before, but academic papers on algorithms often have pseudocode and no actual code.

I thought it would be handy to use AI to generate code from a paper. So a few months ago, as practice in LLM use, I tried to use Claude (not GPT, because I only have access to Claude) to write C++ code implementing the algorithms in this paper, and it didn't go well.

https://users.cs.duke.edu/~reif/paper/chen/graph/graph.pdf

throwaway31131 commented on IBM CEO says there is 'no way' spending on AI data centers will pay off   businessinsider.com/ibm-c... · Posted by u/nabla9
myaccountonhn · 2 months ago
> In an October letter to the White House's Office of Science and Technology Policy, OpenAI CEO Sam Altman recommended that the US add 100 gigawatts in energy capacity every year.

> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

And people think the climate concerns of AI are overblown. The US currently has ~1,300 GW of generating capacity, so 100 GW would be a huge increase each year.

throwaway31131 · 2 months ago
100 GW per year is not going to happen.

The largest plant in the world is the Three Gorges Dam in China at 22 GW, and it's off-the-scale huge. We're not going to build the equivalent of four or five of those every year.

Unless the plan is to power it off Sam Altman’s hot air. That could work. :)

https://en.wikipedia.org/wiki/List_of_largest_power_stations
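
A back-of-the-envelope check on the scale, using only the figures quoted above (both approximate):

    # Rough scale check on "100 GW per year".
    us_capacity_gw = 1300     # approximate current US generating capacity
    added_gw_per_year = 100   # Altman's suggested annual addition
    three_gorges_gw = 22      # largest power station in the world

    print(added_gw_per_year / us_capacity_gw)   # ~0.077 -> ~7.7% growth per year
    print(added_gw_per_year / three_gorges_gw)  # ~4.5 Three Gorges Dams per year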

throwaway31131 commented on IBM CEO says there is 'no way' spending on AI data centers will pay off   businessinsider.com/ibm-c... · Posted by u/nabla9
milesvp · 2 months ago
Worse, a lot of these people are acting like Moore's law isn't still in effect. People conflate clock speeds on beefy hardware with Moore's law and act like it's dead, when transistor density keeps rising and cost per transistor continues to fall at rates similar to what it always has. That means the people racing to build out infrastructure today might be better off parking that money in a low-interest account and waiting 6 months. That was a valid strategy for animation studios in the late 90s (it was not only cheaper to wait, but the finished renders also happened sooner), and I'd be surprised if it's not a valid strategy today for LLMs. The amount of silicon that is going to be produced that is specialized for this type of processing is going to be mind-boggling.
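
A minimal sketch of that wait-vs-buy tradeoff, assuming throughput doubles every 18 months; the job size T is made up. Waiting w months before buying means the job finishes at w + T / 2^(w/18):

    # Hypothetical "park the money and wait" check, Moore's-law pace assumed.
    T = 36  # months of compute the job needs on today's hardware (made up)
    for w in (0, 6, 12):
        finish = w + T / 2 ** (w / 18)
        print(f"wait {w:2d} months -> done at month {finish:.1f}")
    # With T = 36: wait 0 -> 36.0, wait 6 -> 34.6, wait 12 -> 34.7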
throwaway31131 · 2 months ago
Cost per transistor is increasing, or flat if you stay on a legacy node. They've pretty much squeezed all the cost out of 28nm that can be had, and it's the cheapest node per transistor.

“based on the graph presented by Milind Shah from Google at the industry tradeshow IEDM, the cost of 100 million transistors normalized to 28nm is actually flat or even increasing.”

https://www.tomshardware.com/tech-industry/manufacturing/chi...
