Readit News
flatline · a year ago
> It’s like my whole computer is a toddler screaming “LET ME DO IT!” every time I try to create something.

Every autocorrect or auto-suggestion ever has felt like this to me, but the volume has been turned up to 11. The otherwise drab Adobe Reader is covered with colorful sparkly buttons and popups suggesting I need not even read the document because it can give me “insights.” First, no, you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party. But mostly: it can sometimes be a useful tool, and the fact that everyone is shoving it in my face reeks of desperation.

The tech industry is in real trouble.

rsynnott · a year ago
Thing is, we've been here before in a much more limited way; people _hated_ it when Microsoft's demonic paperclip did this in Office. "It looks like you're writing a letter". _Hated_ it.

It is unclear what the industry thinks has changed, that people will now welcome "It looks like you're [whatever]".

jorgeleo · a year ago
The people. The people changed.

This forum (HN) attracts a certain population that wants to do things, to understand, to share relatively well-founded opinions and have a discussion.

But look around; look at the new hires in the other departments. And by new I mean young, in their 20s. A lot of them welcome this kind of thing; they evaluate by popularity and likes. The marketing behind the AI bubble knows this, and so it pushes for it. Making it popular is more important than making it useful, because there is a tipping point where it is popular enough that we capitulate.

Turns out that Idiocracy is not that far behind (https://www.imdb.com/title/tt0387808/)

soerxpso · a year ago
The goal with most of these AI features is not to solve a real problem users are having, it's to add a feature that uses AI. This will not change because it's not wrong of the individuals making the decision. The project manager gets to say he shipped a cutting-edge AI project. The developers all get to put experience working with very hireable technologies at a serious company on their resume. There will be no adverse impact to the bottom line, because the cost to develop the shitty AI feature is a drop in the bucket, and the cost to create a competing product that accomplishes the core thing users are using that product for but without feature bloat would be very high, and probably unsuccessful since "less feature bloat" has never been sufficient to break the static friction threshold for users to switch.

So it won't change, because there is no lesson to learn. No individual involved acted irrationally.

the_snooze · a year ago
It's a design that's in companies' best interests. You can have a computer that's a "friend." One that you trust but that ultimately has a mind of its own. This contrasts with a computer that's merely a tool, one that serves you exclusively at your pleasure and has zero agency of its own.

Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?

dspillett · a year ago
Clippy (and his predecessors; he wasn't one of the first avatars for the feature) might not have been so bad, but marketing got hold of it and decided it didn't pop up often enough for them to really make a big thing of it, so it was tuned up to an irritating level.

> It is unclear what the industry thinks has changed

The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.

I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.

hibikir · a year ago
If the automation is much better at the task than I am, then I am happy to delegate the responsibility to it: it's a matter of accuracy. Clippy kind of sucked even when he was right about what I was trying to do. For many things, the LLMs are getting good enough to outperform me.
kjkjadksj · a year ago
The customer base for computing in the US has expanded probably three- or fourfold, or more, since those Windows XP days. Maybe for the subset of the population that was word processing back then it was annoying. But now we are looking at a different pie entirely, where that subset of annoyed power users is but a tiny sliver. There are people today who have no experience even with a desktop OS.
fragmede · a year ago
Thing is, Gmail's been doing this ~forever with quick replies to emails, now it's just doing longer replies instead of "that's great, thanks" level of replies.
user9925 · a year ago
Kind of silly to compare LLMs to clippy...
eithed · a year ago
But Clippy didn't write the letter for me. If I can be lazy and AI formats what I'm communicating in a way that is accessible to other people, then why should I care?
Rendello · a year ago
After a recent Show HN, I got an email from someone saying that they'd set up a page for my 'product' on their product showcase startup site. I followed the link and saw my open-source project pitched as ChatGPT slop. It felt like a violation because it wasn't just an aggregated link, but a rewrite of my readme with an associated 'pitch'.
CharlesW · a year ago
I recommend reporting this to dang at hn@ycombinator.com. I imagine that he'd be interested in someone crawling HN in order to send automated lead generation spam.
sneak · a year ago
Somehow I don’t have this problem with notepad.exe or vim or pandoc or imagemagick or textedit.app or resolve or blender.

Maybe it isn’t the tech industry, and just consumer-facing apps.

layer8 · a year ago
XorNot · a year ago
Open source has been looking better and better lately because it's not in a mad rush to bolt "AI" features onto it (an LLM will do something) and then shove a huge amount of interface in your face to try to get you to use it.

On some level it's enormously baffling that this was the thing they decided they needed to do... Conversely, Adobe Reader on my phone won't shut up about Liquid Mode either (which uploads to Adobe servers), and Microsoft's and Google's solution to "people don't want to use our AI assistants" was to ensure they literally can't be disabled or removed.

passwordoops · a year ago
Consumer-facing apps are made by the tech industry, so it is an industry problem.
almostnormal · a year ago
Notepad is attempting to fix spelling without asking.
pjc50 · a year ago
> no you may not ready my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party.

This is a massive liability that almost everybody seems to be ignoring. My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.

Maybe an Information Commissioner will get round to issuing a directive some time in the mid-2030s about how none of this complies with GDPR.

mrweasel · a year ago
> My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.

Yet I can almost guarantee you that someone has put something they shouldn't through ChatGPT, because they either feel like it's a dumb rule that should not apply to them, or they were in a hurry and figured: what are the odds of getting caught?

UweSchmidt · a year ago
I think in general, no major liability issue will come up:

- if everyone is doing it, you can't really fault anyone

- on some level we are, or will be, kinda dependent on that AI and opting out will probably be made unpleasant via dark patterns as usual

- no pushback against every piece of software, including at the operating-system level, slurping up all the keystrokes and data, let alone the data that's already in the cloud - big tech knows everything about us, but to my surprise no major public leak has happened, i.e. one where you can actually see your neighbor's private data without buying leaked data from someone on the dark web or wherever

- things are moving too fast, and you don't know if you can afford to have your programmers not use tomorrow's AI, for example, so your "bans" will have to be soft, etc. This limits the potential pushback and outrage

maddmann · a year ago
A blanket ban on AI seems like shooting yourself in the foot. What about local models, on-prem, or private Azure instances?
burkaman · a year ago
Obviously the AI version is bland and terrible, but arguably more importantly it has also completely changed the meaning of the message. The AI version:

- apologizes

- implies the recipient was "promised" this email as a "response" to something

- blames a hectic schedule

- invites questions

None of this was in or was even implied in the original. This is not a "polished" version, it's just a straight-up different email. I thought that style transfer while maintaining meaning was one of the few things LLMs can be good at, but this example fails even that low bar.

cmrdporcupine · a year ago
The AI has some ... "ideas" ... of its own on what workplace relationships apparently need to be like.
escapecharacter · a year ago
A lot of people who want to replace most human interactions with LLMs assume that there is some objective set of cultural values true in all contexts, and that it is good and easy to encode these as axioms into an AI.
bunderbunder · a year ago
Have you read the papers on how they optimize these LLMs for demeanor?

AI exists in a Matrix where toxic positivity is enforced with electric shocks.

Deleted Comment

nicbou · a year ago
And those ideas seem far more in line with millennial Silicon Valley culture. It's weird when they expect Germans to fake that sort of overly formal, overly cheery tone. People just don't talk like that.
polynomial · a year ago
Correct. This is called the production of subjectivity.

(tryna be funny, not patronizing. but the machinery of subjectivity production is ofc very real)

Deleted Comment

yawnxyz · a year ago
this is like when my manager once yelled at me for not writing in corp speak enough
Freak_NL · a year ago
“OK Fine. But could you at least yell at me in corp speak?”

It's no surprise LLMs are using corp speak and vapid marketing prose as a template. There is so much of it out there.

This is from that Autodesk post last week where they admitted their mistake and… nope, it's corp speak:

“We are excited to share some important updates regarding Archiving and our Idea Boards and Forums that aim to enhance your experience and ensure valuable content remains accessible. Please read the details below to understand how these changes might impact you.”

Barf. But to an LLM this looks like a human communicating in a meaningful way.

retropragma · a year ago
It's just shitty prompt design
mronetwo · a year ago
Well... that's a very 2025 sentence...
summermusic · a year ago
No, it's because the AI makes shit up. No amount of prompting will fix this.
bloomingkales · a year ago
There is no way in hell anyone who knows me would get that email and not think I’d been abducted

This person cares about not putting up a fake identity. That's pretty cool, but social media has exposed that a large number of people are perfectly fine presenting an illusion. People will have no shame passing off well written things as an output of their talent and hard work. Digital makeup has no bounds.

vikramkr · a year ago
If you care about putting up a fake identity, this is still bad. Social media is all about being distinct and grabbing attention. Getting samified into a bland, featureless identity isn't the same as carefully crafting a persona to maximize clicks.
ragazzina · a year ago
> People will have no shame passing off well written things as an output of their talent and hard work.

Sometimes I don't want to waste my time crafting a professional e-mail to a bunch of jerks full of themselves. Maybe I want to write it as it comes off my brain, and let my digital scribe reformulate it so that the people reading it feel respected/validated/flattered. Am I putting up a fake identity then? Am I presenting an illusion of professionalism? Maybe writing "Best regards" instead of "Bye" is the facade of professionalism in the first place.

finnthehuman · a year ago
> Am I putting up a fake identity then?

When you did it manually you were putting up a fake identity. ofc using an AI to fake you being fake for work would be fake.

The idea that our work personas aren't at least a little fake is toxic. Depending on where you work it might be a lot fake.

Wear your character as lightly as a cap, don't get tricked into method acting.

fragmede · a year ago
"Best Regards" vs "Bye" is one thing, but unless you're the owner of the company, sending a client "fuck you, pay me" just isn't professional and is probably going to get you fired.
ceejayoz · a year ago
I mean, I hear that. I was asked to be "nicer" in emails once, and when pressed for specific changes, was finally asked to occasionally say "Thanks!" as my sign-off instead of "Thanks,".

The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.

djeastm · a year ago
It's obviously a personal thing, but I even feel a little guilty clicking the autosuggested "thanks" when responding to a text. Everyone has the threshold they're comfortable with.
codr7 · a year ago
I see no problem, assholes deserve bullshit.
boneitis · a year ago
With chucking all comms through an LLM filter settling in as the default workflow these days, I don't think it's even people trying to pass off illusions as their own persona. All it takes is a copy-paste and hitting the Make-Me-Some-Text button. I'm sure the responses would be frustratingly amusing if you were to press them and call them out on it (including trying to pass off the illusion).

Many people didn't think about what they were trying to convey (or self-analyze how they presented themselves) when drafting correspondence in the past; now, many people think just as not-hard, and often continue, as before, to neglect to meaningfully proofread whatever they had the LLM generate for them before hitting Send.

Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.

WaitWaitWha · a year ago
indeed!

At a high level I see a convergence of styles, topics, and behaviors toward a generic form, both in "AI" and in social media. Which to me suggests that the "AI" solutions are doing exactly what we would do ourselves, just faster.

Dead Comment

thrwaway1985882 · a year ago
I've only recently started using AI, and have discovered my use or rejection of it is predicated on my feelings for the task. This argument of "authenticity" really resonates.

I'm a manager, so when I'm sending emails to a customer or talking with one of my reports, I care deeply - so you might get some overwrought florid prose, but it's my overwrought florid prose.

On the other hand, I have to lead a weekly meeting that exists solely to provide evidence for compliance reasons - something out of the CIA's sabotage field manual that David Graeber has probably written about. It is now a thirty-second exercise in uploading a transcript to ChatGPT, prompting for three evidentiary bullet points, and pasting the output into a wiki no human will ever read.

SkyBelow · a year ago
I was thinking about the authenticity of my writing earlier this week and wondering why I have no problem accepting code from an AI and committing it, but I find the idea of passing off an AI's writing as my own feels not just wrong, but immoral on the level of purposeful plagiarism. I feel a distinct difference, but I'm not particularly clear on why. I'm okay with sharing AI writing, but only when I've clearly communicated that it was written by AI.

Probably related to why I can copy a piece of code from elsewhere (with sufficient work to verify it does what I expect and only what I expect) but I don't copy a quote and use it as my own. My words are my words. My code doesn't have the same guarantee.

nicbou · a year ago
Code uses a simplified set of instructions to instruct a computer to do things. Hopefully these instructions can be understood and maintained by a human.

Writing uses the entire breadth of human language to convey information between human beings with unique and complex understandings of the universe. If those words come from a machine that is not you - that is not someone - you ought to disclose it.

the_snooze · a year ago
It's probably because communication is a complex dance between humans, where you're constantly signaling that you're part of some group with the other person. Think of any profession or team, where members share common ways of speaking: jargon, inside jokes, terms of art, terms of endearment, etc. It's useful for cohesion, trust, and efficiency because you're assured that the person you're talking to is indeed "one of us."

If you use an AI to communicate, then either you fail to mimic those group-membership signals and you look like an idiot, or you succeed and show that a machine can fool humans at this game. Any grifter can come along and establish trust in a group by relying on this tech. This dance that humans have been doing since the dawn of time suddenly breaks down, and that doesn't feel good.

nicbou · a year ago
That's also what I do. I hand-write every email because these words have my name under them. On the other hand, if I'm asking the tax office to issue a specific document, I let AI handle it.
ajmurmann · a year ago
I wonder how people feel about "dumber" tools like hemingway.app that make mechanical suggestions for readability, like suggesting simpler synonyms and highlighting sentences that are too long. I've used it for writing documents that were important and that I knew a lot of people would read.
unsupp0rted · a year ago
I’m hoping part of the AI revolution will be to eliminate overwrought florid prose. The excuse can be “it’s terse because AI wrote it”.
tlonny · a year ago
AI "polishing" tools are essentially a form of anti-compression. Lets take some information represented concisely and needlessly pad it with formalities and waffle so it appears more "professional" (whilst also throwing away useful "metadata" like message tone).

No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!

It just feels absurd.

xnorswap · a year ago
AI has simultaneously created two industries: one around using AI to pad out documents and another around summarising them with AI.

The more the first pads, the more the second is needed.

If AI really were intelligent, I'd fear it was an organism making sure it's needed in the ecosystem.

whywhywhywhy · a year ago
Why read something someone couldn't be bothered to write?
codr7 · a year ago
Yeah, and once that happens, why read anything from them ever again?
pimlottc · a year ago
> No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!

Not entirely, there’s still the energy usage and stock price increases. All because everyone’s too anxious to just talk to each other directly.

codr7 · a year ago
My (evidently mentally disabled) previous manager was so proud of being able to use AI to generate the bullshit he sent out to clients. What the morons are really doing is proving they're useless, let them.
pjc50 · a year ago
You have an individual and unique way of speaking and writing? We're going to have to polish that out with the slop machine, citizen.
bayindirh · a year ago
I use Grammarly to check for errors I make when writing more serious stuff (English is not my native language), but any suggestion it sends my way changes the tone of the text so much that it sounds like it was written by a PR agency with a fake, forced attitude - bland and colorless.

So no, thank you. Correct my textbook punctuation mistakes, and leave my wordy and "not positive enough" sentences to me.

EugenioPerea · a year ago
I'm getting increasingly irritated by Grammarly's attempts to boringify my writing. I've even considered doing away with it entirely, even if it means I have to do my own spell-checking.
__MatrixMan__ · a year ago
I'm working on a dystopia where the resistance is using text-in-text steganography to coordinate, so unpolished communication is flagged for extra scrutiny because all those stylistic choices might be hiding something.
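For the curious, that kind of text-in-text trick is easier than it sounds. Here is a minimal sketch (my own illustration, not anything from the story): hide a payload in otherwise innocuous text by appending invisible zero-width Unicode characters that encode the payload's bits.

```python
# Zero-width-character steganography sketch:
# U+200B (zero-width space) encodes a 0 bit,
# U+200C (zero-width non-joiner) encodes a 1 bit.
ZW0, ZW1 = "\u200b", "\u200c"

def hide(cover: str, secret: str) -> str:
    """Append the secret's bits as invisible characters after the cover text."""
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return cover + payload  # renders identically to `cover`

def reveal(text: str) -> str:
    """Extract the zero-width characters and decode them back into bytes."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("Thanks for the update, looks good to me.", "meet at dawn")
print(stego)          # looks like a plain, "polished" reply
print(reveal(stego))  # -> meet at dawn
```

Which is exactly why, in that dystopia, text that has been through an LLM normalizer is "safe" and hand-typed text is suspect: the filter strips the invisible characters along with the author's voice.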
shagie · a year ago
Ever read The Freeze Frame Revolution?

Though its steganography is a bit more obvious, given that "you've got to be able to read it".

The kindle version of the book starts with https://imgur.com/uIBjwlQ

This would give you the opportunity to have another ending to the book.

baq · a year ago
Your thoughts will be replaced by <thoughts></thoughts>. For your convenience.
ragazzina · a year ago
What a strange complaint about AI. This already happens and happened without AI.

>You have an individual and unique way of speaking and writing? You're going to wish your e-mail finds people well, corporate-monkey.

jollyllama · a year ago
Is it really so strange to complain about the nudging towards the phenomenon of which you speak?
ckozlowski · a year ago
Newspeak.
normalaccess · a year ago
laurent_du · a year ago
I wouldn't say the author's style is unique or individual in any way. Every single Tumblr blog sounds like that. You could easily create a "make edgy" function that would take your formal writing and turn it into that kind of prose. Is it better or worse than "polish"? There's no substantial difference. The "polish" version sure sounds less exhausting than the original.
roywiggins · a year ago
your writing style doesn't have to be "unique" or "original" to be yours
itishappy · a year ago
It's personal. "Unique" and "individual" might not be the best words to describe it, but it's clearly a style they've intentionally adopted. They appear to have been quite successful for it to!

Deleted Comment

benreesman · a year ago
Digital culture was fake and performative and insincere enough before Turboclippy: fuck that with something sharp.

It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.

incognito124 · a year ago
I truly appreciate the term Turboclippy and will be using it from now on
polynomial · a year ago
Well, that's just corporatism taken to the extreme, which everything is.
causal · a year ago
I think LLMs are transformative but it's incredible to me how unimaginative most product managers have been. It reminds me of the 90s when people discovered GIFs can be put on web pages so every page had to have a hundred of them. It was tacky, as is most embedded AI.
munificent · a year ago
> I think LLMs are transformative

So is a landmine.