Readit News
prodigycorp · 4 days ago
The best part about this is that you know the type of people/companies using langchain are likely the type that are not going to patch this in a timely manner.
peab · 4 days ago
Langchain is great because it provides you an easy method to filter people out when hiring. Candidates who talk about langchain are, more often than not, low quality candidates.
babyshake · 3 days ago
Would you say the same for Mastra? If so, what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?
stingraycharles · 3 days ago
What makes you say that? Because it’s popular?
deepsquirrelnet · 4 days ago
It helps with job seeking as well. Easy to know which places to avoid.
notnullorvoid · 3 days ago
Curious what your critique is for LangChain?

I found the general premise of the tools (in particular LangGraph) to be solid. I was never in the position to use it (not my current area of work), but had I been I may have suggested building some prototypes with it.

wilkystyle · 4 days ago
Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user.
int_19h · 4 days ago
I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but when you looked under the hood all it was doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it.
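To make the "fundamentally simple task" concrete: the core of prompt templating is plain string substitution, which the standard library already handles. A minimal sketch (nothing LangChain-specific, just what sits under the hood of most prompt pipelines):

```python
from string import Template

# The "prompt pipeline" being abstracted is, at bottom, substitution:
prompt = Template("Summarize the following text in $style style:\n\n$text")
filled = prompt.substitute(
    style="bullet-point",
    text="LangChain is a framework for building LLM applications.",
)
```

An f-string does the same job when the values are in scope; `string.Template` is just the version that separates the template from the fill step.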
XCSme · 4 days ago
I'm not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually takes more code to use than simply writing my own direct LLM API calls.

Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
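For reference, a "direct API call" really is this small. A hedged sketch using only the stdlib; the endpoint and response shape are the OpenAI chat-completions ones, and the model name is a placeholder you'd swap for whatever you actually use:

```python
import json
import urllib.request

# Assumed endpoint (OpenAI chat completions); other providers differ.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """The entire 'prompt pipeline': a dict with a messages list."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def complete(prompt: str, api_key: str) -> str:
    """Send one chat-completion request and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

That's the whole surface area a framework has to beat before its abstractions pay for themselves.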

prodigycorp · 4 days ago
No dig at you, but I take the average langchain user to be someone who either a) is using it because their C-suite heard about it at some AI conference and foisted it upon them, or b) does not care about software quality in general.

I've talked to many people who regret building on top of it but they're in too deep.

I think you may come to the same conclusions over time.

anonzzzies · 4 days ago
Yep, not sure why anyone is using it still.
echelon · 4 days ago
These guys raised $125M at $1.3B post on $12M ARR? What.

> Today, langchain and langgraph have a combined 90M monthly downloads, and 35 percent of the Fortune 500 use our services

What? This seems crazy. Maybe I'm blind, but I don't see the long term billion dollar business op here.

Leftpad also had those stats, iirc.

blcknight · 3 days ago
What would you use instead?

I built an internal CI chat bot with it like 6 months ago when I was learning. It’s deployed and doing what everyone needs it to do.

Claude Code can do most of what it does without needing anything special. I think that's the future, but I hate the vendor lock-in Anthropic is pushing with CC.

All my python tools could be skills, and some folks are doing that now but I don’t need to chase after every shiny thing — otherwise I’d never stop rewriting the damn thing.

Especially since there's no standardization yet on plugins/skills/commands/hooks.

fn-mote · 4 days ago
> The blast radius is scale

Ugh. I’m a native English speaker and this sounds wrong, massaged by LLM or not.

“Large blast radius” would be a good substitute.

I am happy this whole issue doesn’t affect me, so I can stop reading when I don’t like the writing.

morkalork · 4 days ago
Makes you think a human wrote it, which for an article on a .ai domain is kind of shocking!
croemer · 4 days ago
It's definitely LLM generated. I came here to post that, then saw you had already pointed it out. Giveaway for me: 'The most common real-world path here is not “attacker sends you a serialized blob and you call load().” It’s subtler:'

The "it's not X, it's Y" construction; bolded items in lists.

Also no programmer would use this apostrophe instead of single quote.

minitech · 3 days ago
> Also no programmer would use this apostrophe instead of single quote.

I’m a programmer who likes punctuation, and all of my pointless internet comments are lovingly crafted with Option+]. It’s also the default for some word processors. Probably not wrong about the article, though.

shahartal · 4 days ago
CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
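For anyone wanting the gist of why "object confusion during (de)serialization" matters, here is a generic sketch of the pattern. This illustrates the vulnerability class only, not the actual langchain-core code; all names are made up:

```python
import importlib
import json

def naive_load(blob: str):
    """Hypothetical loader that lets the *data* decide what gets built."""
    data = json.loads(blob)
    module = importlib.import_module(data["module"])  # attacker-chosen module
    cls = getattr(module, data["class"])              # attacker-chosen class
    return cls(**data.get("kwargs", {}))

# If untrusted input ever reaches a loader like this, the blob picks the
# object -- e.g. a class whose constructor or repr pulls in credentials --
# and secrets can end up in logs, traces, or model output.
```

The mitigation in every variant of this class is the same: only reconstruct types from an explicit allowlist, never from a name embedded in the data.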
threecheese · 4 days ago
Cheers to all the teams on sev1 calls on their holidays, we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure)
nextworddev · 4 days ago
Meanwhile Harrison Chase is laughing his way to the bank
croemer · 4 days ago
LLM slop. At least one clear error (hallucination): "’Twas the night before Christmas, and I was doing the least festive kind of work: staring at serialization"

Per the disclosure timeline, the report was made on December 4; it was definitely not the night before Christmas when you were doing the work.

eviks · 3 days ago
Security research often looks dramatic from the outside. In reality, it is usually the mundane work of asking AI to make up dramatic stories

nubg · 4 days ago
WHY on earth did the author of the CVE feel the need to feed the description text through an LLM? I get dizzy when I see this AI slop style.

I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.

iamacyborg · 4 days ago
> WHY on earth did the author of the CVE feel the need to feed the description text through an LLm?

Not everyone speaks English natively.

Not everyone has taste when it comes to written English.

nkrisc · 3 days ago
I personally find that text written by a human, even someone without a strong grasp of the language, is always preferable to read simply because each word (for better or worse) was chosen by a human to represent their ideas.

If you use an LLM because you think you can't write and communicate well, and that's true, then you're feeding content you already believe isn't worthy of expressing your ideas into a machine that will drag your words even further from what you intended.

nubg · 4 days ago
If I want to cleanup, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.
crote · 4 days ago
I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!

If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?

dorianmariecom · 4 days ago
you can use chatgpt to reverse the prompt
XCSme · 4 days ago
Not sure if it's a joke, but I don't think an LLM is a bijective function.
small_scombrus · 4 days ago
ChatGPT can generate you a sentence that plausibly looks like the prompt
crtasm · 4 days ago
What're the odds they've licensed the Grinch character for use on their company blog?
rainonmoon · 3 days ago
This would make this blog notable as the first AI company to proactively respect trademark.