The best part about this is that you know the type of people/companies using langchain are likely the type that are not going to patch this in a timely manner.
Langchain is great because it provides you an easy method to filter people out when hiring. Candidates who talk about langchain are, more often than not, low quality candidates.
Would you say the same for Mastra? If so, what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?
I found the general premise of the tools (in particular LangGraph) to be solid. I was never in the position to use it (not my current area of work), but had I been I may have suggested building some prototypes with it.
I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but when you looked under the hood all it was doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it.
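Concretely (from memory, so the exact LangChain API details may be off), the kind of thing I mean, next to the plain template it boils down to:

    from langchain_core.prompts import PromptTemplate

    # The "enterprise" version: an abstraction over a format string.
    prompt = PromptTemplate.from_template("Summarize {topic} in one paragraph.")
    text = prompt.format(topic="serialization bugs")

    # What it actually does:
    text = "Summarize {} in one paragraph.".format("serialization bugs")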
I am not sure what the stereotype is, but I tried using langchain and realised that most of the functionality actually takes more code to use than simply writing my own direct LLM API calls.
Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
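By "direct API call" I mean nothing fancier than this kind of sketch (endpoint and fields as I remember the OpenAI chat completions API; adjust for your provider):

    import os
    import requests

    # A plain chat completion request, no framework in between.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": "Summarize this ticket: ..."}],
        },
        timeout=30,
    )
    answer = resp.json()["choices"][0]["message"]["content"]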
No dig at you, but I take the average langchain user as one who is either a) using it because their C-suite heard about it at some AI conference and had it foisted upon them or b) does not care about software quality in general.
I've talked to many people who regret building on top of it but they're in too deep.
I think you may come to the same conclusions over time.
I built an internal CI chat bot with it like 6 months ago when I was learning. It’s deployed and doing what everyone needs it to do.
Claude Code can do most of what it does without needing anything special. I think that’s the future but I hate the vendor lock in Anthropic is pushing with CC.
All my python tools could be skills, and some folks are doing that now but I don’t need to chase after every shiny thing — otherwise I’d never stop rewriting the damn thing.
Especially since there’s no standardization yet on plugins/skills/commands/hooks.
It's definitely LLM generated. I came here to post that, then saw you had already pointed it out. Giveaway for me: 'The most common real-world path here is not “attacker sends you a serialized blob and you call load().” It’s subtler:'
The "it's not X, it's Y" construction; bolded items in a list.
Also no programmer would use this apostrophe instead of single quote.
> Also no programmer would use this apostrophe instead of single quote.
I’m a programmer who likes punctuation, and all of my pointless internet comments are lovingly crafted with Option+]. It’s also the default for some word processors. Probably not wrong about the article, though.
CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
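For readers outside the langchain ecosystem, the general shape of this risk class (a hypothetical, simplified loader, not LangChain's actual code): reconstructing objects from untrusted data lets the data choose which classes get instantiated, and a class that pulls credentials on init can end up carrying them wherever the object goes.

    import importlib
    import json

    # Hypothetical naive loader: the blob, not the caller, decides which class is built.
    def naive_load(blob: str):
        data = json.loads(blob)
        module = importlib.import_module(data["module"])   # attacker-chosen module
        cls = getattr(module, data["cls"])                 # attacker-chosen class
        return cls(**data.get("kwargs", {}))               # attacker-chosen arguments

    # If the chosen class reads API keys from the environment in __init__ and the
    # resulting object is later re-serialized or logged, those secrets travel with it.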
Cheers to all the teams on sev1 calls on their holidays, we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure)
LLM slop. At least one clear error (hallucination): "’Twas the night before Christmas, and I was doing the least festive kind of work: staring at serialization"
Per the disclosure timeline the report was made on December 4, so it was definitely not the night before Christmas when you were doing the work.
I personally find that text written by a human, even someone without a strong grasp of the language, is always preferable to read simply because each word (for better or worse) was chosen by a human to represent their ideas.
If you use an LLM because you think you can’t write and communicate well, then if that’s true it means you’re feeding content that you already believe isn’t worthy of expressing your ideas to a machine that will drag your words even further from what you intended.
If I want to clean up, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.
I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!
If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?
> Today, langchain and langgraph have a combined 90M monthly downloads, and 35 percent of the Fortune 500 use our services
What? This seems crazy. Maybe I'm blind, but I don't see the long term billion dollar business op here.
Leftpad also had those stats, iirc.
Ugh. I’m a native English speaker and this sounds wrong, massaged by LLM or not.
“Large blast radius” would be a good substitute.
I am happy this whole issue doesn’t affect me, so I can stop reading when I don’t like the writing.
I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.
Not everyone speaks English natively.
Not everyone has taste when it comes to written English.