crazylogger commented on Code is cheap. Show me the talk   nadh.in/blog/code-is-chea... · Posted by u/ghostfoxgod
Waterluvian · 14 days ago
I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.

But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.

I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.

crazylogger · 14 days ago
You are describing traditional (deterministic) automation, before AI. AI systems as general as today's SOTA LLMs will happily take on the job regardless of whether the task falls into class I or class II.

Ask a robot arm "how should we improve our car design this year?" and it'll certainly get stuck. Ask an AI and it'll give you a real opinion that's at least on par with a human's. If a company builds enough tooling to close the "AI comes up with an idea -> AI designs a prototype -> AI robot physically builds the car -> AI robot test-drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then in theory this can definitely work.

This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line distinguishing class I from class II.

crazylogger commented on TimeCapsuleLLM: LLM trained only on data from 1800-1875   github.com/haykgrigo3/Tim... · Posted by u/admp
root_axis · a month ago
I think it would raise some interesting questions, but if it did yield anything noteworthy, the biggest question would be why that LLM is capable of pioneering scientific advancements and none of the modern ones are.
crazylogger · a month ago
Or maybe LLMs are pioneering scientific advancements - people are using LLMs to read papers, choose what problems to work on, come up with experiments, analyze results, draft papers, etc., at this very moment. Except they eventually put their human names on the paper, so we almost never know.
crazylogger commented on Claude Code CLI was broken   github.com/anthropics/cla... · Posted by u/sneilan1
zozbot234 · a month ago
Nobody cares how the code looks, this is not an art project. But we certainly care if the code looks totally unmaintainable, which vibe-coded slop absolutely does.
crazylogger · a month ago
Proper vibe coding should involve tons of vibe refactoring.

I'd say spending at least a quarter of my vibe-coding time on refactoring and documentation refreshes, to keep the codebase impeccable, is the only way my projects can work at all long term. We don't want to confuse the coding agent.

crazylogger commented on GPT-5.2-Codex   openai.com/index/introduc... · Posted by u/meetpateltech
wahnfrieden · 2 months ago
Because the best value is from the subscription where the price is stable
crazylogger · 2 months ago
From a couple hours of usage in the CLI, 5.2-codex seems to burn through my plan's limit noticeably faster than 5.1-codex. So I guess the usage limit is a set dollar amount of API credits under the hood.
crazylogger commented on Structured outputs on the Claude Developer Platform   claude.com/blog/structure... · Posted by u/adocomplete
sails · 3 months ago
Agree, it feels so fundamental. Any idea why? Gemini has also had it for a long time
crazylogger · 3 months ago
The way you got structured output with Claude prior to this was via tool use.

IMO this was the more elegant design if you think about it: tool calling is really just structured output, and structured output is really just tool calling. The "do not provide multiple ways of doing the same thing" philosophy.
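A minimal sketch of that equivalence (the tool name and schema are made up for illustration; the response here is mocked in Anthropic's content-block shape, not a live API call): you define a "tool" whose input schema is the output schema you want, and the model's tool-call arguments are your structured output.

```python
# Sketch: structured output via tool use. The tool's input schema shapes
# the model's "tool call", whose arguments ARE the structured output.
# (Illustrative only: the response below is mocked, not a live API call.)

record_schema = {
    "name": "record_summary",
    "description": "Record a structured summary of the input text.",
    "input_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "sentiment": {"type": "string",
                          "enum": ["positive", "negative", "neutral"]},
        },
        "required": ["title", "sentiment"],
    },
}

def extract_structured(response: dict) -> dict:
    """Pull the tool call's arguments out of the response content blocks."""
    for block in response["content"]:
        if block["type"] == "tool_use" and block["name"] == record_schema["name"]:
            return block["input"]
    raise ValueError("model did not call the tool")

# Mocked model response:
mock_response = {
    "content": [
        {"type": "tool_use", "name": "record_summary",
         "input": {"title": "Structured outputs", "sentiment": "positive"}}
    ]
}

print(extract_structured(mock_response))
# {'title': 'Structured outputs', 'sentiment': 'positive'}
```

Forcing the model to call exactly this tool then gives you schema-conforming JSON every time, which is all "structured output" is.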

crazylogger commented on GPT-5-Codex-Mini – A more compact and cost-efficient version of GPT-5-Codex   github.com/openai/codex/r... · Posted by u/wahnfrieden
simonw · 3 months ago
> Anecdotally, a Max subscriber gets something like $100 worth of usage per day.

Where are you getting that number from?

Anthropic added quite strict limits on usage - visible from the /usage method inside Claude Code. I would be surprised if those limits turn out to still result in expensive losses for them.

crazylogger · 3 months ago
This is just personal experience plus Reddit anecdotes. I've been using CC since day one (when API pricing was the only way to pay for it); since then I've been on the $20 Pro plan and am getting a solid $5+ worth of usage in each 5h session, times 5-10 sessions per week (so an overall 5-10x subsidy over a month). I extrapolated that $200 subscribers must be getting roughly 10x Pro's usage. I do feel the actual limit fluctuates each week as Claude Code engages in this new subsidy war with OAI's Codex, though.
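The back-of-envelope arithmetic behind that 5-10x figure, using the anecdotal numbers above (all figures approximate):

```python
# Rough subsidy estimate from the anecdote above (approximate figures).
plan_price = 20            # $/month, Pro plan
value_per_session = 5      # $ of API-equivalent usage per 5h session (lower bound)
sessions_per_week = (5, 10)
weeks_per_month = 4

monthly_usage = tuple(value_per_session * s * weeks_per_month
                      for s in sessions_per_week)        # ($100, $200) of usage
subsidy = tuple(u / plan_price for u in monthly_usage)   # 5x to 10x the plan price
print(monthly_usage, subsidy)
# (100, 200) (5.0, 10.0)
```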

My theory is this:

- we know from benchmarks that open-weight models like DeepSeek R1 and Kimi K2 are not far behind SOTA GPT/Claude in capability

- open-weight API pricing (e.g. on openrouter) is roughly 1/10~1/5 that of GPT/Claude

- users can more or less choose to hook their agent CLI/IDEs to either closed or open models

If these points hold, then the only reason people are primarily on CC & Codex plans is that those plans are subsidized by 5~10x. Confronted with true costs, users will quickly switch to the lowest-cost inference vendor, and we get perfect competition and zero margin for all vendors.

crazylogger commented on GPT-5-Codex-Mini – A more compact and cost-efficient version of GPT-5-Codex   github.com/openai/codex/r... · Posted by u/wahnfrieden
simonw · 3 months ago
Charging developers $200/month for Claude Code and getting to a billion in ARR sounds like a pretty great business to be in to me, especially with this growth rate:

> Claude Code is reportedly close to generating $1 billion in annualized revenue, up from about $400 million in July.

https://techcrunch.com/2025/11/04/anthropic-expects-b2b-dema...

crazylogger · 3 months ago
Anecdotally, a Max subscriber gets something like $100 worth of usage per day. The more people use Claude Code, the more Anthropic loses, so it sounds like a classic "selling a dollar for 85 cents" business to me.

As soon as users are confronted with their true API cost, the appearance of this being a good business falls apart. At the end of the day, there is no moat around large language models - OpenAI, Anthropic, Google, DeepSeek, Alibaba, Moonshot... any company can make a SOTA model if they wish, so in the long run it's guaranteed to be a race to the bottom where nobody can turn a profit.

crazylogger commented on Show HN: Sosumi.ai – Convert Apple Developer docs to AI-readable Markdown   sosumi.ai/... · Posted by u/_mattt
danielfalbo · 5 months ago
How do you reliably convert HTML to MD for any page on the internet? I remember struggling with this in the past.

How hard would it be to build an MCP that's basically a proxy for web search except it always tries to build the markdown version of the web pages instead of passing HTML?

Basically Sosumi.ai, but instead of working only for Apple docs it works for any web page (including every doc on the internet)

crazylogger · 5 months ago
https://pure.md is exactly what you're looking for.

But stripping complex formats like HTML & PDF down to simple markdown is a hard problem. It's nearly impossible to infer what the rendered page looks like from the raw HTML/PDF source alone. https://github.com/mozilla/readability helps, but it often breaks down on unconventional div structures. I've heard the state-of-the-art solution is using multimodal LLM OCR to actually look at the rendered page and rewrite it in markdown.
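A toy stdlib sketch shows why tag-based conversion is fragile: a naive converter can only map the tags it recognizes, so any structure expressed through CSS or unconventional div nesting is simply lost (illustrative code, nowhere near a production converter):

```python
from html.parser import HTMLParser

class NaiveMarkdown(HTMLParser):
    """Toy HTML -> markdown converter: handles a few tags, ignores layout.
    Pages that express structure via CSS classes or nested divs come out flat."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.out.append("# ")
        elif tag == "li":
            self.out.append("- ")

    def handle_endtag(self, tag):
        if tag in ("h1", "p", "li"):
            self.out.append("\n")

    def handle_data(self, data):
        if data.strip():
            self.out.append(data.strip())

def to_markdown(html: str) -> str:
    parser = NaiveMarkdown()
    parser.feed(html)
    return "".join(parser.out)

print(to_markdown("<h1>Title</h1><p>Hello</p><ul><li>one</li></ul>"))
# # Title
# Hello
# - one
```

The same page rebuilt out of styled `<div>`s would produce one undifferentiated blob, which is exactly the failure mode readability-style heuristics hit.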

Which makes me wonder: how did OpenAI make their model read pdf, docx and images at all?

crazylogger commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
root_axis · 6 months ago
The suggested requirements are not engineering problems. Conceiving of a model architecture that can represent all the systems described in the blog is a monumental task of computer science research.
crazylogger · 6 months ago
I think the OP's point is that all those requirements are to be implemented outside the LLM layer, i.e. we don't need to conceive of any new model architecture. Even if LLMs don't progress any further beyond GPT-5 & Claude 4, we'll still get there.

Take memory, for example: give the LLM a persistent computer and ask it to jot down its long-term memory as hierarchical directories of markdown documents. Recalling a piece of memory then means a bunch of `tree` and `grep` commands. It's very, very rudimentary, but it kind of works, today. We just have to think of incrementally smarter ways to query and maintain this type of memory repo, which is a pure engineering problem.
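A minimal sketch of that rudimentary scheme (the file layout and function names here are my own invention, purely illustrative): memory writes are appends to topic files, and recall is a grep over the tree.

```python
# Sketch: long-term memory as a directory tree of markdown notes.
# remember() appends under a topic path; recall() is a poor man's `grep -r`.
import pathlib
import re
import tempfile

def remember(root: pathlib.Path, topic: str, note: str) -> None:
    """Append a note to <root>/<topic>/notes.md, creating dirs as needed."""
    f = root / topic / "notes.md"
    f.parent.mkdir(parents=True, exist_ok=True)
    with f.open("a") as fh:
        fh.write(f"- {note}\n")

def recall(root: pathlib.Path, pattern: str) -> list:
    """Return (relative file, line) pairs whose line matches the pattern."""
    hits = []
    for f in sorted(root.rglob("*.md")):
        for line in f.read_text().splitlines():
            if re.search(pattern, line):
                hits.append((f.relative_to(root).as_posix(), line))
    return hits

root = pathlib.Path(tempfile.mkdtemp())
remember(root, "projects/car", "2025 design uses aluminum frame")
remember(root, "people", "Alice prefers async reviews")
print(recall(root, "aluminum"))
# [('projects/car/notes.md', '- 2025 design uses aluminum frame')]
```

The "smarter" versions are incremental swaps: embeddings instead of regex for recall, periodic consolidation passes instead of raw appends, but the storage substrate stays plain files.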

crazylogger commented on AGENTS.md – Open format for guiding coding agents   agents.md/... · Posted by u/ghuntley
stingraycharles · 6 months ago
It is. README is for humans, AGENTS / etc is for LLMs.

Document how to use and install your tool in the readme.

Document how to compile, test, architecture decisions, coding standards, repository structure etc in the agents doc.

crazylogger · 6 months ago
We have CONTRIBUTING.md for that. Seems to me the author just doesn't know about it?

u/crazylogger

Karma: 153 · Cake day: May 3, 2022