calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
amonks · 2 months ago
long shot, apropos of nothing, just recognized your name:

If you are the cincinnatian poet Caleb Kaiser, we went to college together and I’d love to catch up. Email in profile.

If you aren’t, disregard this. Sorry to derail the thread.

calebkaiser · 2 months ago
Hello friend!
calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
timr · 2 months ago
There’s nothing particularly wrong with the article - it’s a superficial summary of stuff that has historically happened in the world of LLM context windows.

The problem is - and it’s a problem common to AI right now - you can’t generalize anything from it. The next thing that drives LLMs forward could be an extension of what you read about here, or it could be a totally random other thing. There are a million monkeys tapping on keyboards, and the hope is that someone taps out Shakespeare’s brain.

calebkaiser · 2 months ago
I don't really understand this line of criticism, in this context.

What would "generalizing" the information in this article mean? I think the author does a good job of contextualizing most of the techniques under the general umbrella of in-context learning. What would it mean to generalize further beyond that?

calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
voidhorse · 2 months ago
There is nothing precise about crafting prompts and context—it's just that, a craft. Even if you do the right thing and check some fuzzy boundary conditions using autoscorers, the model can still change out from beneath you at any point and totally alter the behavior of your system. There is no formal language here. After all, mathematics exists because natural language is notoriously imprecise.

The article has some good practical tips, and it's not on the author, but man, I really wish we'd stop abusing the term "engineering" in a desperate attempt to stroke our own egos and/or convince people to give us money. It's pathetic. Coming up with good inputs to LLMs is more art than science; it's a craft. Call a spade a spade.

calebkaiser · 2 months ago
I think it's fair to question the use of the term "engineering" throughout a lot of the software industry. But to be fair to the author, his focus in the piece is on design patterns that require what we'd commonly call software engineering to implement.

For example, his first listed design pattern is RAG. To implement such a system from scratch, you'd need to construct a data layer (commonly a vector database), retrieval logic, etc.
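
Just to make that concrete, here's a rough sketch of the moving parts (the toy hashed-bag-of-words embedding and the in-memory index are stand-ins I'm using for illustration; a real system would use a learned embedding model and a proper vector store):

    import numpy as np

    # Toy "embedding model": hashed bag-of-words vectors. A stand-in that
    # keeps the sketch runnable; a real system would use a learned model.
    def embed(text: str, dim: int = 256) -> np.ndarray:
        vec = np.zeros(dim)
        for token in text.lower().split():
            vec[hash(token) % dim] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    # "Data layer": documents plus their vectors (a vector database in production).
    docs = [
        "RAG retrieves documents and places them in the model's context.",
        "Structured output constrains generation to a schema or grammar.",
        "Tool calling lets a model request external function executions.",
    ]
    index = np.stack([embed(d) for d in docs])

    # Retrieval logic: cosine similarity against the query, take the top-k docs.
    def retrieve(query: str, k: int = 2) -> list[str]:
        scores = index @ embed(query)
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    # Context assembly: retrieved text is prepended to the prompt sent to the LLM.
    question = "How does RAG work?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    print(prompt)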

In fact I think the author largely agrees with you re: crafting prompts. He has a whole section admonishing "prompt engineering" as magical incantations, which he differentiates from his focus here (software which needs to be built around an LLM).

I understand the general uneasiness around using "engineering" when discussing a stochastic model, but I think it's worth pointing out that there is a lot of engineering work required to build the software systems around these models. Writing software to parse context-free grammars into masks to be applied at inference, for example, is as much "engineering" as any other common software engineering project.
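
For a sense of what that masking step looks like, here's a toy sketch. A tiny hand-rolled state machine stands in for the grammar, and random numbers stand in for model logits; real libraries compile the grammar into token-level automata, but the core move is the same: zero out the probability of any token that would break the format.

    import numpy as np

    vocab = ["0", "1", "2", "cat", "dog", "."]

    def allowed_tokens(generated: str) -> set[str]:
        # Hand-rolled stand-in for a grammar engine: digits, then a period.
        if generated.endswith("."):
            return set()                     # output finished
        if any(ch.isdigit() for ch in generated):
            return {"0", "1", "2", "."}      # continue digits or terminate
        return {"0", "1", "2"}               # must start with a digit

    def constrained_step(logits: np.ndarray, generated: str) -> str:
        # Mask disallowed tokens to -inf before sampling.
        mask = np.array([tok in allowed_tokens(generated) for tok in vocab])
        masked = np.where(mask, logits, -np.inf)
        probs = np.exp(masked - masked.max())
        probs /= probs.sum()
        return np.random.choice(vocab, p=probs)

    generated = ""
    while not generated.endswith(".") and len(generated) < 8:
        fake_logits = np.random.randn(len(vocab))  # stand-in for model output
        generated += constrained_step(fake_logits, generated)
    print(generated)  # e.g. "120." -- never "cat" or "dog"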

calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
sgt101 · 2 months ago
Why would I believe that any of this works? This is just some bloke's idea of what people should do.

There is no evidence offered. No attempt to measure the benefits.

calebkaiser · 2 months ago
Most of the inference techniques (what the author calls context engineering design patterns) listed here originally came from the research community, and there are tons of benchmarks measuring their effectiveness, as well as a great deal of research behind what is happening mechanistically with each.

As the author points out, many of the patterns are fundamentally about in-context learning, and this in particular has been subject to a ton of research from the mechanistic interpretability crew. If you're curious, I think this line of research is fascinating: https://transformer-circuits.pub/2022/in-context-learning-an...
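
If "in-context learning" sounds abstract, the simplest version is just a few-shot prompt: the "training data" lives in the context window and the model picks up the pattern without any weight update (the reviews below are made up for illustration):

    # Sent as-is to any completion endpoint, the model is expected to
    # continue with "negative" -- learned purely from the prompt.
    few_shot_prompt = """Classify the sentiment of each review.

    Review: The battery died within a week. -> negative
    Review: Setup took thirty seconds and it just works. -> positive
    Review: Shipping was slow but the product is solid. -> positive
    Review: I regret buying this. ->"""
    print(few_shot_prompt)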

calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
elteto · 2 months ago
Are there any open source examples of good context engineering or agent systems?
calebkaiser · 2 months ago
Any of the "design patterns" listed in the article will have a ton of popular open source implementations. For structured generation, I think outlines is a particularly cool library, especially if you want to poke around at how constrained decoding works under the hood: https://github.com/dottxt-ai/outlines
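
As a rough sketch of what structured generation looks like from the user's side (the API below is from the 0.x line of outlines and may have changed in newer releases, and the model checkpoint is just an example):

    from pydantic import BaseModel
    import outlines

    class Character(BaseModel):
        name: str
        age: int
        strength: int

    model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

    # outlines compiles the JSON schema into token-level constraints, so
    # every sampled token keeps the output valid against the schema.
    generator = outlines.generate.json(model, Character)
    character = generator("Generate a fantasy character:")
    print(character)  # a validated Character instance, not free-form text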
calebkaiser commented on Context engineering   chrisloy.dev/post/2025/08... · Posted by u/chrisloy
aeve890 · 2 months ago
I'd say this shit is even worse than "good at googling". Literal incantations for stochastic machines are just two notches above checking the horoscope.
calebkaiser · 2 months ago
Based on the comments, I expected this to be slop listing a bunch of random prompt snippets from the author's personal collection.

I'm honestly a bit confused at the negativity here. The article is incredibly benign and reasonable. Maybe a bit surface level and not incredibly in depth, but at a glance, it gives fair and generally accurate summaries of the actual mechanisms behind inference. The examples it gives for "context engineering patterns" are actual systems that you'd need to implement (RAG, structured output, tool calling, etc.), not just a random prompt, and they're all subject to pretty thorough investigation from the research community.

The article even echoes your sentiments about "prompt engineering," down to the use of the word "incantation". From the piece:

> This was the birth of so-called "prompt engineering", though in practice there was often far less "engineering" than trial-and-error guesswork. This could often feel closer to uttering mystical incantations and hoping for magic to happen, rather than the deliberate construction and rigorous application of systems thinking that epitomises true engineering.
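
To make the "systems you'd need to implement" point concrete, here's roughly what the plumbing behind tool calling looks like, with a stub standing in for the model (my own illustration, not anything from the article):

    import json

    # Stub "model": always asks for the weather tool first, then answers.
    # In a real system this would be a chat-completions call that can emit
    # either a final answer or a tool request.
    def fake_model(messages: list[dict]) -> dict:
        if any(m["role"] == "tool" for m in messages):
            return {"type": "answer", "content": "It's 21°C in Cincinnati."}
        return {"type": "tool_call", "name": "get_weather",
                "arguments": {"city": "Cincinnati"}}

    def get_weather(city: str) -> str:
        return json.dumps({"city": city, "temp_c": 21})  # stub tool

    TOOLS = {"get_weather": get_weather}

    messages = [{"role": "user", "content": "What's the weather in Cincinnati?"}]
    while True:
        reply = fake_model(messages)
        if reply["type"] == "answer":
            print(reply["content"])
            break
        # The engineering work: validate the requested tool, execute it,
        # and feed the result back into the context for the next turn.
        result = TOOLS[reply["name"]](**reply["arguments"])
        messages.append({"role": "tool", "name": reply["name"], "content": result})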

calebkaiser commented on CompileBench: Can AI Compile 22-year-old Code?   quesma.com/blog/introduci... · Posted by u/jakozaur
peatmoss · 3 months ago
Though this is more "LLM uses a variety of open source tools and compilers to compile source," I do wonder whether there will eventually be a role for transformers in compiling code.

I've mentioned this before, but "sufficiently smart compiler" would be the dream here. Start with high level code or pseudo code, end up with something optimized.

calebkaiser · 3 months ago
There's been a decent chunk of research in this direction over the years. Michael O'Boyle is pretty active as a researcher in the space, if you're looking for stuff to read: https://www.dcs.ed.ac.uk/home/mob/
calebkaiser commented on Important machine learning equations   chizkidd.github.io//2025/... · Posted by u/sebg
bee_rider · 4 months ago
Are eigenvalues or singular values used much in the popular recent stuff, like LLMs?
calebkaiser · 4 months ago
LoRA is built around low-rank matrix decompositions (some variants use singular value decomposition to initialize the low-rank matrices). In different optimizers, you'll also see eigendecomposition or some approximation of it used (I think Shampoo does something like this, but it's been a while).
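
For a quick illustration of the low-rank idea (this is just the decomposition step in numpy with arbitrary sizes, not anyone's training code):

    import numpy as np

    # Truncated SVD as a low-rank factorization: keep the top-r singular
    # values to split one large matrix into two small ones -- the same
    # shape of update (A @ B) that LoRA-style adapters learn.
    W = np.random.randn(512, 512)
    r = 8

    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]    # (512, r)
    B = Vt[:r, :]           # (r, 512)

    W_approx = A @ B
    print(W.size, A.size + B.size)                            # 262144 vs 8192 parameters
    print(np.linalg.norm(W - W_approx) / np.linalg.norm(W))   # reconstruction error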
