jacobr1 commented on FCC bars providers for non-compliance with robocall protections   docs.fcc.gov/public/attac... · Posted by u/impish9208
coldpie · a day ago
Good start. Next, put the people running these scam phone providers in jail.
jacobr1 · a day ago
How many are based in the US and subject to US-based prosecution?
jacobr1 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
jacobr1 · 5 days ago
This is now a thing: https://agents.md/
jacobr1 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
Wowfunhappy · 14 days ago
I keep reading this, but with Claude Code in particular, I consistently find it gets smarter the longer my conversations go on, peaking right at the point where it auto-compacts and everything goes to crap.

This isn't always true--some conversations go poorly and it's better to reset and start over--but it usually is.

jacobr1 · 13 days ago
I've found there usually is some key context that is missing. Maybe it is the project structure, or a sampling of key patterns from different parts of the codebase, or key data models. Getting those into CLAUDE.md reduces the need to keep building up as large a context.

As an example, for one project I realized things were getting better after it started writing integration tests. I wasn't sure if it was that the act of writing the tests forced it to reason about the black-box way the system would be used, or if there was another factor. It turns out it was just example usage. Extracting the usage patterns into both the README and CLAUDE.md was itself a simple request, and after that I got similar performance on new tasks.

jacobr1 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
km144 · 13 days ago
As you alluded to at the end of your post—I'm not really convinced 20k LOC is very limiting. How many lines of code can you fit in your working mental model of a program? Certainly less than 20k concrete lines of text at any given time.

In your working mental model, you have broad understandings of the broader domain. You have broad understandings of the architecture. You summarize broad sections of the program into simpler ideas. module_a does x, module_b does y, insane file c does z, and so on. Then there is the part of the software you're actively working on, where you need more concrete context.

So as you move towards the central task, the context becomes more specific. But the vague outer context is still crucial to the task at hand. Now, you can certainly find ways to summarize this mental model in an input to an LLM, especially with increasing context windows. But we probably need to understand how we would better present these sorts of things to achieve performance similar to a human brain, because the mechanism is very different.

jacobr1 · 13 days ago
This is basically how Claude Code works today. You have it /init a description of the project structure into CLAUDE.md, which is used for each invocation. There is also some implicit knowledge in the project about common frameworks and languages. Then, to bridge the gap between that explicit/implicit knowledge and the task at hand, it greps for relevant material in the project, loads files in full or in part, and THEN starts working on the task. It dynamically builds its context of the codebase by searching for the relevant bits. Short-circuiting this with a good project summary makes it more efficient - but you don't need to literally copy in all the code files.
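To make that concrete, here is a rough Python sketch of the search-then-load loop described above. It is illustrative only - the function names, the *.py glob, and the match-count scoring are assumptions, not Claude Code's actual internals:

```python
import re
from pathlib import Path

def build_context(task_keywords: list[str], repo_root: str, max_files: int = 5) -> str:
    """Grep the repo for task-relevant files, then load the best hits."""
    pattern = re.compile("|".join(map(re.escape, task_keywords)), re.IGNORECASE)
    hits = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        score = len(pattern.findall(text))
        if score:
            hits.append((score, str(path), text))
    hits.sort(reverse=True)  # most keyword matches first

    parts = []
    summary = Path(repo_root) / "CLAUDE.md"
    if summary.exists():
        # A good static project summary short-circuits much of the search.
        parts.append(summary.read_text())
    parts += [f"## {p}\n{t}" for _, p, t in hits[:max_files]]
    return "\n\n".join(parts)
```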
jacobr1 commented on Show HN: Building a web search engine from scratch with 3B neural embeddings   blog.wilsonl.in/search-en... · Posted by u/wilsonzlin
ricardobeat · 14 days ago
You could argue that it is not really a search query. There is not a particular page that answers the question “correctly”, it requires collating multiple sources and reasoning. That is not a search problem.
jacobr1 · 13 days ago
And yet ... that is exactly the kind of problem average people want search engines to solve for them, and that Google kept trying to solve. It is probably one reason why AI chat with web search is going to beat plain search.
jacobr1 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
zarzavat · 14 days ago
Both modes of operation are useful.

If you know how to do something, then you can give Claude the broad strokes of how you want it done and -- if you give enough detail -- hopefully it will come back with work similar to what you would have written. In this case it's saving you on the order of minutes, but those minutes add up. There is a possibility for negative time saving if it returns garbage.

If you don't know how to do something then you can see if an AI has any ideas. This is where the big productivity gains are, hours or even days can become minutes if you are sufficiently clueless about something.

jacobr1 · 14 days ago
And importantly, the cycle time on this stuff can be much faster. Trying out different variants and iterating through larger changes can be huge.
jacobr1 commented on Claude Sonnet 4 now supports 1M tokens of context   anthropic.com/news/1m-con... · Posted by u/adocomplete
daymanstep · 14 days ago
> I have been able to swap to languages I have almost no experience in and work fairly well because memorizing syntax is irrelevant.

I do wonder whether your code does what you think it does. Similar-sounding keywords in different languages can have completely different meanings. E.g. the volatile keyword in Java vs C++. You don't know what you don't know, right? How do you know that the AI generated code does what you think it does?

jacobr1 · 14 days ago
Beyond code-gen, I think some techniques are very underutilized. One can generate tests, generate docs, or have it explain things line by line. Having it explicitly lay out alternative approaches and tradeoffs is helpful too. While, as with everything in this space, there are imperfections, I find a ton of value in looking beyond the code itself and thinking through the use cases, alternative approaches, and different ways to structure the same thing.
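For illustration, one way to drive those techniques is a handful of prompt templates. Everything here is a placeholder - the prompt wording is made up, and `ask` stands in for whatever chat-completion call you use:

```python
# Illustrative prompts for the beyond-code-gen techniques above.
PROMPTS = {
    "tests": "Write unit tests covering the edge cases of this code:\n{code}",
    "docs": "Write docstrings and a short usage example for:\n{code}",
    "explain": "Explain this code line by line:\n{code}",
    "tradeoffs": "Describe two alternative designs and their tradeoffs:\n{code}",
}

def review(code: str, ask) -> dict[str, str]:
    """Run every review prompt against the same code via the `ask` callback."""
    return {name: ask(t.format(code=code)) for name, t in PROMPTS.items()}
```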
jacobr1 commented on Let's get real about the one-person billion dollar company   marcrand.com/p/lets-get-r... · Posted by u/bizgrayson
burkaman · 14 days ago
This seems like an exaggeration. It was a one-person company for five years, then started expanding and sold for $575M seven years later. By that time it had ~75 employees.
jacobr1 · 14 days ago
That provides a good case study. A single person can bootstrap a company, with outsourcing and a ton of grit. The boundaries on what tasks are automatable, at what cost points, and with what level of experience required to figure them out have probably all shifted in favor of individuals or small founding groups in recent years.

Maybe you can stay bootstrapped longer with less capital. That seems true.

jacobr1 commented on An LLM does not need to understand MCP   hackteam.io/blog/your-llm... · Posted by u/gethackteam
danielrico · 19 days ago
I jumped off the LLM boat a little before MCP was a thing, so I thought the tools were presented as needed by the prompt/context, in a way not dissimilar to RAG. Isn't this the standard way?
jacobr1 · 19 days ago
You _can_ build things that way. But then you need some business logic to decide which tools to expose to the system. The easy/dumb way is just to give it all the tools. With RAG, you have a retrieval step where you have hardcoded some kind of search (likely semantic) plus some pruning or relevance logic (say, take the top 5 results that have at least X% relevancy).

With tools there is no equivalent. Maybe you could try some semantic similarity to the tool description, but I don't know of any system that does that.
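For what it's worth, a toy sketch of what that could look like: select tools by embedding similarity to the request, mirroring the RAG relevance cutoff above. The vectors are assumed to come from some embedding model, and the top-k/threshold numbers are made up:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_tools(request_vec: list[float],
                 tools: list[tuple[str, list[float]]],
                 top_k: int = 5, min_sim: float = 0.3) -> list[str]:
    """tools: (name, embedded description). Expose only the closest matches."""
    scored = sorted(((cosine(request_vec, vec), name) for name, vec in tools),
                    reverse=True)
    return [name for sim, name in scored[:top_k] if sim >= min_sim]
```

With something like this, each request would only see the handful of tools that look relevant, instead of the full catalog.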

What seems to be happening is building distinct "agents" that have a set of tools designed into them. An agent is a system prompt plus tools, where some of the tools might be the ability to call or hand off to other agents. Each call to an agent is a new context, albeit with some limited context handed in from the caller agent. That way you manually decompose the project into a distinct set of sub-agents that can be concretely reasoned about and that each perform a small set of related tasks. Then you need some kind of overall orchestration agent that handles dispatch to the others.
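A minimal sketch of that decomposition, purely illustrative - call_llm is a placeholder rather than any real client, and the agent/tool names are invented:

```python
from dataclasses import dataclass, field
from typing import Callable

def call_llm(context: str, tools: dict[str, Callable]) -> str:
    # Placeholder for a real model call that can use the given tools.
    return f"<response using up to {len(tools)} tools>"

@dataclass
class Agent:
    name: str
    system_prompt: str
    tools: dict[str, Callable] = field(default_factory=dict)

    def run(self, task: str, handed_in: str = "") -> str:
        # Fresh context per call; only the caller's summary crosses over.
        context = f"{self.system_prompt}\n\nFrom caller: {handed_in}\n\nTask: {task}"
        return call_llm(context, self.tools)

def as_tool(target: Agent) -> Callable:
    """Wrap an agent as a tool so an orchestrator can hand off to it."""
    return lambda task, summary="": target.run(task, handed_in=summary)

billing = Agent("billing", "You handle invoices and refunds.")
orchestrator = Agent("orchestrator",
                     "Decompose the request; hand off to the right sub-agent.",
                     tools={"handoff_billing": as_tool(billing)})
```

The point being that handoff is itself just a tool, so the orchestrator reasons about sub-agents the same way it reasons about any other capability.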

jacobr1 commented on Did Craigslist decimate newspapers? Legend meets reality   poynter.org/business-work... · Posted by u/zdw
AdmiralAsshat · 19 days ago
Sure, but then that meant you probably bought the odd single issue, or maybe for an extended period until you found a job. You didn't buy a subscription for the sole purpose of looking at the classifieds.
jacobr1 · 19 days ago
You bought the bundle. You got games (crosswords and such), local/national/international news, local arts coverage, schedules for things like movies and theater, recipes, comics, sports scores and analysis, classifieds, even the ads to see if there were shopping deals to be had.

You bought a subscription in part for the value of one or more sections, but also because culturally everyone else did too, and once you had it you probably found something interesting. Few people ever read the whole paper front to back regularly. That is why headlines were important, both then and now.
