djoldman · 17 days ago
> Bob needs a new computer for his job.... In order to obtain a new work computer he has to create a 4 paragraph business case explaining why the new computer will improve his productivity.

> Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM.... The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.

"LLM inflation" as a "bad" thing often reflects a "bad" system.

In the case described, the bad system is the expectation that one has to write, or is more likely to obtain a favorable result from writing, a 4 paragraph business case. Since Bob inflates his words to fill 4 paragraphs and the manager deflates them to summarise, it's clear that the 4 paragraph expectation/incentive is the "bad" thing here.

This phenomenon of assigning the cause of "bad" things to LLMs is pretty rife.

In fact, one could say that the LLM is optimizing given the system's requirements: it makes it a lot easier to get around this bad framework.

Aransentin · 17 days ago
The 4-paragraph business case was useful for creating friction, which meant that if you couldn't be bothered to write 4 paragraphs you very likely didn't need the computer upgrade in the first place.

This might have been a genuinely useful system, something which broke down with the existence of LLMs.

MontyCarloHall · 17 days ago
The only definitively non-renewable resource is time. Time is often spent like a currency, whose monetary instrument is some tangible proxy of how much time elapsed. Verbosity was an excellent proxy, at least prior to the advent of generative AI. As you said, the reason Bob needs to write 4 paragraphs to get a new PC is to prove that he spent the requisite time for that computer, and is thus serious about the request.

It’s the same reason management consultants and investment bankers spend 80+ hours a week working on enormous slide decks that only ever get skimmed by their clients: it proves to the clients that the firm spent time on them, and is thus serious about the case/deal.

It’s also the same reason a concise thank-you note (“thanks for the invite! we had a blast!”) or a concise condolence note (“very sorry for your loss”) is received a lot less well than a couple of verbose paragraphs on how great the event was or how much the deceased will be missed, even if all that extra verbiage conveys absolutely nothing beyond the core sentiment. (The very best notes, of course, use their extra words to convey something personally meaningful beyond “thanks” or “sorry.”)

Gen-AI completely negates meaningless verbosity as a proxy of time spent. It will be interesting to see what emerges as a new proxy, since time-as-currency is extremely engrained into the fabric of human social interactions.

ryanmcbride · 17 days ago
This is the sort of workplace philosophising that I hate the most. Employees aren't children. They don't need to have artificial bullshit put up in between them and what they need, the person approving just needs to actually pay attention.

If someone wants a new computer they should just have to say why. And if it's a good reason, give it to them. If not, don't. Managers have to manage. They have to do their jobs. I'm a manager and I do my job by listening to the people I manage. I don't put them through humiliation rituals to get new equipment.

handoflixue · 17 days ago
The problem is, I'm a verbose writer and can trivially churn out 4 paragraphs, while another person is going to struggle. The friction is targeting the wrong area: this is a 15-minute break for me, and an hour-long nightmare for my dyslexic co-worker.

Social media will give you a good idea what sort of person enjoys writing 4 paragraphs when something goes wrong; do you really want to incentivize that?

fmbb · 17 days ago
I mean it’s all up to the employer if they want employees to be productive.

If they don’t care they don’t care. They pay most of us for our time anyway, not what we achieve.

patcon · 17 days ago
I love this article for how it gets at the thinking, and I love your response.

I've been aware of a similar dynamic in politics, where the collective action/intelligence of the internet destroyed all the old signals politicians used to rely on. Emails don't mean anything like letters used to mean. Even phone calls are automated now. Your words and experience matter more in a statistical big-data sense than individually.

---

This puts me in sci-fi world-building mode, wondering what the absurd extension is... maybe it's just proving burned time investment. So in an imagined world where LLMs are available to all as extensions of thought via neural implant, you can't be taken seriously for even the simplest direct statements unless you prove your mind sat and did nothing (aka wasted its time) for some arbitrary period. So if you sat in the corner and registered inactive boredom for 2h, and attached a non-renewable proof of that to your written words, then people would take your perspective seriously, because you expended (though not "gave") your limited attention/time on the request for some significant amount of time.

spencerflem · 17 days ago
There's a 1992 modern art piece that was a blank canvas that the artist promised they spent 1000 hours staring directly at

https://www.mirandawhall.space/1000-hours-of-staring/

djoldman · 17 days ago
Politics is the OG bad system.

Because politicians literally write the rules of the system, it's incredibly difficult to prevent abuse, bad incentives, and inefficiency.

One of the most fundamentally broken aspects of the US system is that:

1. politicians stay in power by being elected.

2. people are not required to vote and casting a vote incurs a cost (time, travel, expense), therefore not everyone votes.

3. politicians just have to get the votes of the fraction of people who actually do vote and can ignore those who don't.

hdgvhicv · 17 days ago
The broken thing here is that Bob, costing $10k a week, is after a new computer costing $100 a week.
kelipso · 17 days ago
This part I always found funny. Significantly increase productivity of the team for a fraction of the price of employing them? Absolutely not.
neutronicus · 17 days ago
If Bob's job is anything like mine, Bob's new computer will take a week to set up
zamadatix · 17 days ago
Whether it's an additional $100/week for base salary or $100/week for overhead, Bob is always going to be after another $100/week regardless of what his current cost is. If Bob wants to manage overheads like salary costs directly (most don't), then he probably wants more of a contractor-type position.

As to the content of the letter, the 4 paragraphs are supposed to be "these are reasons I think were missed and why it'll cost more not to correct them," not just "I put effort into writing 4 paragraphs of stuff" friction alone.

Having run a short stint as an internal IT manager at an IT-focused company... it's astounding how many non-standard/out-of-cycle laptop requests are actually either basic user error (even for the most brilliant technical employees) or basic IT systems problems (e.g. poorly tested management/security tool changes eating up performance/battery in certain configurations) that a new laptop won't actually solve. E.g. reports of "my battery runs out in 2 hours and my IM is dog slow" from someone on an M1/M2 MacBook Pro who probably wouldn't notice if they got an M1 or M4 MacBook back, as their issue isn't actually the hardware. When someone writes an email or ticket explaining why their use case just wasn't accounted for, it's generally pretty obvious they really do need something different.

jdubs1984 · 17 days ago
> Bob’s manager receives 4 paragraphs of dense prose and realises from the first line that he’s going to have to read the whole thing carefully to work out what he’s being asked for and why. Instead, he copies the email into the LLM.... The 4 paragraphs are summarised as “The sender needs a new computer as his current one is old and slow and makes him unproductive.” The manager approves the request.

Bob’s manager is lazy and/or an idiot.

Probably both.

ToucanLoucan · 17 days ago
> In fact, one could say that the LLM is optimizing given the system's requirements: it makes it a lot easier to get around this bad framework.

Sure, as long as we completely disregard the water, power and silicon wasted to accomplish this goal.

danielbln · 17 days ago
This image originally came out right around the time of ChatGPT's release and captures it well: https://i.imgur.com/RHGD9Tk.png


reginald78 · 17 days ago
The important part is that GDP is now increased because of the cost of energy and the additional hardware needed to expand and then compress the original data. Think of the economic growth all these new hassles provide!
lubujackson · 17 days ago
Next innovation: compress the AI translation layer from both sides. I feel like there might be an unbelievable Weissman score that can be achieved!
verbify · 17 days ago
Engaging with why we might actually want inflation of text:

1) For pedagogical or explanatory purposes. For example, if I were to write:

> ∀x∈R,x^2≥0

I've used 10 characters to say

> For every real number x, its square is greater than or equal to zero

For a mathematician, the first is sufficient. For someone learning, the second might be better (and perhaps also an expansion of what a 'real number' is, or that 'square' means 'multiplying it by itself').

2) To make sure everything is stated and explicit. "He finally did x" implies that something has been anticipated/worked on for a while, but "after a period of anticipation he did x" makes it clearer. This also raises the question of who was anticipating, which could be made explicit too.

As someone who spends a lot of time converting specifications to code (and explaining technical problems to non-technical people), unstated assumptions are very prevalent. And then sometimes people have different conceptions of the unstated assumption (i.e. some people might think that nobody was anticipating, it just took longer than you'd expect otherwise).

So longer text might seem like a simple expansion, but then it ends up adding detail.

I definitely agree with the author's point; I just want to argue that having a text-expander tool isn't quite as useless as 'generate garbage for me'.

nemomarx · 17 days ago
Can a generator do things like (2) if that detail wasn't in the input text?
lblume · 17 days ago
Ambiguity is resolved via provided context, but just as with conversations, this context may be severely underspecified.
antonvs · 17 days ago
Yes, because generators generate at the token level, which is technically smaller than an individual word. They can easily generate unique sentences, and for example transfer learning allows them to apply knowledge obtained from some other training data to new domains.

The idea that generators are some sort of parrot is very outdated. The 2021 paper that coined the term "stochastic parrot" was already wrong when it was published.
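To see the sub-word point concretely, here's a quick sketch (assuming the open-source tiktoken package; other BPE tokenizers behave similarly):

    # Peek at how a BPE tokenizer splits text into sub-word tokens.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for tok in enc.encode("stochastic parrot"):
        # Print each token id next to the text fragment it covers;
        # the fragments are typically pieces of words, not whole words.
        print(tok, repr(enc.decode([tok])))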

msgodel · 17 days ago
If I need it expanded I can put it into my LLM myself.
nathan_compton · 17 days ago
The older I get, the more concise I find myself (which is not to say I'm actually concise, as my comment history will demonstrate), but LLMs have really driven home just how much noise day-to-day communication involves. So much filler text.

It still surprises me when I see non-technical enthusiasts get excited about LLMs drafting almost useless copy or email or whatever. So much garbage text that no one reads but that has to be written for some reason. It's weird.

integralid · 17 days ago
"I wrote this mail slightly longer because I didn't have time to make it short" - someone famous

When writing something I want people to read, I always take time at the end to make it shorter - remove distracting sentences, unnecessary adjectives and other noise. Really works wonders for team communication.

wafflemaker · 17 days ago
> When writing something I want people to read, I always take time at the end to make it shorter - remove distracting sentences, unnecessary adjectives and other noise.

This is good advice. How can I do it when talking? I often talk too much while saying little, often losing the listener's attention in the process.

nicbou · 17 days ago
I write guides for a living, and my audience is largely comprised of non-native speakers. I write simply and unambiguously, and I've been told multiple times that my style seeps through my blog posts, my comments and my text messages.

You are so right! LLMs produce so much noise. If you ask them to be concise, they struggle to cut just the fat, and the output is often vague or misleading. I see that again and again when I ask them to produce different versions of a sentence.

I imagine it's how artists feel about AI art. It seems right at first glance, but you can tell that no thought or craftsmanship went into it.

Gigachad · 17 days ago
On one side you have people using LLMs to fluff a sentence into an essay. And on the receiving side they are hitting a button to AI-summarise it back into a sentence.

What incredible technology.

unglaublich · 17 days ago
An LLM is effectively a compressed model of its input data.

Inference is then the decompression stage where it generates text from the input prompt and the compressed model.

Now that compressing and decompressing texts is trivial with LLMs, we humans should focus - in business at least - on communicating only the core of what we want to say.

If the argument to get a new keyboard is: "i like it", then this should suffice, for inflated versions of this argument can be trivially generated.
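To make that compression view concrete: under any probabilistic model, the ideal compressed size of a text is its Shannon code length, the sum of -log2 p(token | context) over its tokens. A toy sketch, with a hypothetical unigram model standing in for the LLM:

    # Ideal code length of a text under a toy unigram model, in bits.
    # A real LLM conditions on context; this is just the simplest case.
    import math
    from collections import Counter

    corpus = "i like it i like the keyboard it is old and slow".split()
    counts = Counter(corpus)
    total = sum(counts.values())

    def bits_to_encode(text):
        # Words outside the toy corpus would need smoothing; omitted here.
        return sum(-math.log2(counts[w] / total) for w in text.split())

    print(bits_to_encode("i like it"))          # common words -> few bits
    print(bits_to_encode("old slow keyboard"))  # rarer words -> more bits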

AIPedant · 17 days ago
What I hate about this is that often a novel and interesting idea truly needs extra space to define and illustrate itself, and by virtue of its novelty LLMs will have substantially more difficulty summarizing it correctly. But it sounds like we are heading to a medium-term where people cynically assume any long email must be LLM-generated fluff, and hence nothing is lost by asking for an LLM summary.

What a horrible technology.

danielbln · 17 days ago
Not to be overly snide, but I can imagine that almost every person who writes long, tedious emails that wax on and on thinks they have something novel and interesting that truly needs extra space. Also, most novel things are composed of pedestrian things, which LLMs have no issue summarizing sufficiently.

Maybe you can provide an example where this case would occur, and maybe some indication how often you think this would occur.

onlyrealcuzzo · 17 days ago
> If the argument to get a new keyboard is: "i like it", then this should suffice

This seems like exactly what LLMs are supposed to be good at, according to you, so why don't they just near-losslessly compress the data first, and then train on that?

Also, if they're so good at this, then why are their answers often long-winded, requiring so much skimming to get what I want?

I'm skeptical LLMs are accurately described as "near lossless de/compression engines".

If you change the temperature settings, they can get quite creative.

They are their algorithm, run on their inputs, which can be roughly described as a form of compression, but it's unlike the main forms of compression we think of - and it at least appears to have emergent decompression properties we aren't used to.

If you up the lossiness on a JPEG, you don't really end up with creative outputs. Maybe you do by coincidence, and maybe that's all that happens with LLMs too - but at much higher rates.

Whatever is happening does not seem to be what I think people typically associate with simple de/compression.

Theoretically, you can train an LLM on all of physics except a few things, and it could discover the missing pieces through reasoning.

Yeah, maybe a JPEG could, too, but the odds of that seem astronomically lower.
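For reference, the temperature knob mentioned above is just a rescaling of the model's output distribution before sampling; a minimal sketch with made-up logits:

    # Temperature scaling: divide logits by T before the softmax.
    # T > 1 flattens the distribution (more variety), T < 1 sharpens it.
    import numpy as np

    def softmax_with_temperature(logits, temperature):
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()              # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    logits = [4.0, 2.0, 1.0, 0.5]  # made-up next-token scores
    for t in (0.5, 1.0, 2.0):
        print(t, np.round(softmax_with_temperature(logits, t), 3))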

ACCount36 · 17 days ago
If you don't design your compressor to output data that can be compressed further, it's going to trash compressibility.

And if you find a way to compress text that isn't insanely computationally expensive, and still leaves the compressed text further compressible by LLMs - i.e. usable in training/inference? You would, basically, have invented a better tokenizer.

A lot of people in the industry are itching for a better tokenizer, so feel free to try.
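For anyone curious, the heart of the BPE-style tokenizers most current LLMs use fits in a few lines: repeatedly merge the most frequent adjacent pair of symbols. A toy sketch, nowhere near a production tokenizer:

    # Byte-pair encoding in miniature: greedily merge frequent pairs.
    from collections import Counter

    def bpe_tokens(text, num_merges):
        symbols = list(text)  # start from individual characters
        for _ in range(num_merges):
            pairs = Counter(zip(symbols, symbols[1:]))
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]  # most frequent pair
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                    merged.append(a + b)  # fuse the winning pair
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            symbols = merged
        return symbols

    print(bpe_tokens("low lower lowest", 4))  # prints the merged symbol list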

tomrod · 17 days ago
The inverse of this is "AI Loopidity", where we burn cycles inflating and then deflating information (in emails, say, or in AI code that blows up and then gets reduced or summarized). This often also leads to weird comms outcomes, like saving a JPEG at 85% quality a dozen times.
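That last analogy is easy to demonstrate; a small sketch (assuming Pillow and NumPy are installed) that re-encodes a synthetic image a dozen times at quality 85 and prints how far the pixels drift from the original:

    # Generation loss: re-encode the same image repeatedly as JPEG.
    import io
    import numpy as np
    from PIL import Image

    # A synthetic noisy image keeps the script self-contained.
    img = Image.effect_noise((128, 128), sigma=64).convert("RGB")
    original = np.asarray(img, dtype=np.int16)

    for generation in range(1, 13):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=85)  # lossy encode
        buf.seek(0)
        img = Image.open(buf).convert("RGB")      # decode, then repeat
        drift = np.abs(np.asarray(img, np.int16) - original).mean()
        print(generation, round(float(drift), 2))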
wood_spirit · 17 days ago
And each cycle introducing error like a game of telephone
nicbou · 17 days ago
An early article about LLMs called it "a blurry JPEG of the web" for exactly those reasons. It's a good read.
1980phipsi · 17 days ago
Be more trivial
nlawalker · 17 days ago
Long documents in business contexts that get summarized and go mostly unread are the byproduct of a specific and common level of trust and accountability in those contexts: people don't believe someone has done enough critical thinking or has a strong enough justification for a proposal unless they've put it on the page, but if it is on the page, it's assumed that it does in fact represent critical thinking and legitimate justification.

If trust was higher, shorter documents would be more desirable. If trust was lower, or accountability higher, summarization would be used a lot more carefully.

LLMs haven't changed anything in this regard except that they've made it extremely easy to abuse trust at that specific level. The long-term result will be that trust will fall in the general case, and people will eventually become more careful about using summarization. I don't think it will be long before productized AI used in business contexts will be pretrained/fine-tuned to perform a basic level of AI content detection or include a qualitative measure of information density by default when performing summarization.

ruuda · 17 days ago
I consider inflation a double insult. (https://ruudvanasseldonk.com/2025/llm-interactions) It says "I couldn't be bothered to spend time writing this myself, but I'm expecting you to read all the fluff."
hangonhn · 17 days ago
To be fair, the recipient probably isn't reading it either. They're getting it summarized. LLMs are creating more work for other LLMs.
numpad0 · 17 days ago
> That we are using LLMs for inflation should not be taken as a criticism of these wonderful tools. It might, however, make us consider why we find ourselves inflating content. At best we’re implicitly rewarding obfuscation and time wasting; at worst we’re allowing a lack of clear thinking to be covered up. I think we’ve all known this to be true, but LLMs allow us to see the full extent of this with our own eyes. Perhaps it will encourage us to change!

Yeah, this is the problem. Wealth distribution stopped working sometime in the late 20th century and we're fighting each other for competitive advantages. That's the core of this phenomenon.

No one needs containers full of baby-sized left shoes, but proof of work must be shown. So the leather must be cut and the shoes must be sewn, only to be left in the ever-growing pile in the backyard. That's kind of wrong.