> It’s like my whole computer is a toddler screaming “LET ME DO IT!” every time I try to create something.
Every autocorrect or auto-suggestion ever has felt like this to me, but the volume has been turned up to 11. The otherwise drab Adobe Reader is covered with colorful sparkly buttons and popups suggesting I need not even read the document because it can give me “insights.” First, no you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party. But mostly: yes, it can sometimes be a useful tool, and the fact everyone is shoving it in my face reeks of desperation.
Thing is, we've been here before in a much more limited way; people _hated_ it when Microsoft's demonic paperclip did this in Office. "It looks like you're writing a letter". _Hated_ it.
It is unclear what the industry thinks has changed, that people will now welcome "It looks like you're [whatever]".
This forum (HN) attracts a certain population that wants to do things, to understand, to share relatively well-founded opinions, and to have a discussion.
But look around, look at the new hires in the other departments. And by new I mean young, in their 20s. A lot of them welcome this kind of thing; they evaluate by popularity and likes. The marketing behind the AI bubble knows this, and so it pushes for it. Making it popular is more important than making it useful, because there is a tipping point where it's popular enough that we capitulate.
The goal with most of these AI features is not to solve a real problem users are having, it's to add a feature that uses AI. This will not change because it's not wrong of the individuals making the decision. The project manager gets to say he shipped a cutting-edge AI project. The developers all get to put experience working with very hireable technologies at a serious company on their resume. There will be no adverse impact to the bottom line, because the cost to develop the shitty AI feature is a drop in the bucket, and the cost to create a competing product that accomplishes the core thing users are using that product for but without feature bloat would be very high, and probably unsuccessful since "less feature bloat" has never been sufficient to break the static friction threshold for users to switch.
So it won't change, because there is no lesson to learn. No individual involved acted irrationally.
It's a design that's in companies' best interests. You can have a computer that's a "friend." One that you trust but ultimately has a mind of its own. This contrasts with a computer that's merely a tool, that serves you exclusively at your pleasure and has zero agency of its own.
Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?
Clippy (and his predecessors; he wasn't one of the first avatars for the feature) might not have been so bad, but marketing got hold of it and decided it didn't pop up often enough for them to really make a big thing of it, so it was tuned up to an irritating level.
> It is unclear what the industry thinks has changed
The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.
I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.
If the automation is much better at the task than I am, then I am happy to delegate the responsibility to it: it's a matter of accuracy. Clippy kind of sucked even when he was right about what I was trying to do. For many things, the LLMs are getting good enough to outperform me.
The customer base for computing has expanded probably three- or four-fold or more since those Windows XP days in the US. Maybe for the subset of the population that was word processing back then it was annoying. But now we are looking at a different pie entirely, where that subset of annoyed power users is but a tiny sliver. There are people today who have no experience even with a desktop OS.
Thing is, Gmail's been doing this ~forever with quick replies to emails; now it's just doing longer replies instead of "that's great, thanks"-level replies.
But Clippy didn't write the letter for me. If I can be lazy and AI formats what I'm communicating in a way that is accessible to other people, then why should I care?
After a recent Show HN, I got an email from someone saying that they'd set up a page for my 'product' on their product showcase startup site. I followed the link and saw my open-source project pitched as ChatGPT slop. It felt like a violation because it wasn't just an aggregated link, but a rewrite of my readme with an associated 'pitch'.
I recommend reporting this to dang at hn@ycombinator.com. I imagine that he'd be interested in someone crawling HN in order to send automated lead generation spam.
Open source has been looking better and better lately because it's not in a mad rush to bolt "AI" features onto it (an LLM will do something) and then shove a huge amount of interface in your face to try to get you to use it.
On some level it's enormously baffling that this was the thing they decided they needed to do... conversely, Adobe Reader on my phone won't shut up about Liquid Mode either (which uploads to Adobe servers), and Microsoft's and Google's solution to "people don't want to use our AI assistants" was to ensure they literally can't be disabled or removed.
> no you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party.
This is a massive liability that almost everybody seems to be ignoring. My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Maybe an Information Commissioner will get round to issuing a directive some time in the mid-2030s about how none of this complies with GDPR.
> My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Yet I can almost guarantee you that someone has put something they shouldn't through ChatGPT, because they either feel like it's a dumb rule that should not apply to them, or they were in a hurry and what are the odds of them getting caught?
I think in general, no major liability issue will come up:
- if everyone is doing it, you can't really fault anyone
- on some level we are, or will be, kinda dependent on that AI and opting out will probably be made unpleasant via dark patterns as usual
- no pushback to every piece of software, including at the operating system level, slurping all the keystrokes and data, let alone the data that's already in the cloud
- big tech knows everything about us but to my surprise no major public leak has happened, i.e. one where you really can see your neighbor's private data without buying leaked data from someone on the dark web or wherever
- things are moving too fast, and you don't know if you can afford to have your programmers not use tomorrow's AI, for example, so your "bans" will have to be soft, etc.; this limits the potential pushback and outrage
Obviously the AI version is bland and terrible, but arguably more importantly it has also completely changed the meaning of the message. The AI version:
- apologizes
- implies the recipient was "promised" this email as a "response" to something
- blames a hectic schedule
- invites questions
None of this was in or was even implied in the original. This is not a "polished" version, it's just a straight-up different email. I thought that style transfer while maintaining meaning was one of the few things LLMs can be good at, but this example fails even that low bar.
A lot of people who want to replace most human interactions with LLMs assume that there is some objective set of cultural values true in all contexts, and that it is good and easy to encode these as axioms into an AI.
And those ideas seem far more in line with millennial Silicon Valley culture. It's weird when they expect Germans to fake that sort of overly formal, overly cheery tone. People just don't talk like that.
“OK Fine. But could you at least yell at me in corp speak?”
It's no surprise LLMs are using corp speak and vapid marketing prose as a template. There is so much of it out there.
This is from that Autodesk post last week where they admitted their mistake and… nope, it's corp speak:
“We are excited to share some important updates regarding Archiving and our Idea Boards and Forums that aim to enhance your experience and ensure valuable content remains accessible. Please read the details below to understand how these changes might impact you.”
Barf. But to an LLM this looks like a human communicating in a meaningful way.
There is no way in hell anyone who knows me would get that email and not think I’d been abducted.
This person cares about not putting up a fake identity. That's pretty cool, but social media has exposed that a large number of people are perfectly fine presenting an illusion. People will have no shame passing off well written things as an output of their talent and hard work. Digital makeup has no bounds.
If you care about putting up a fake identity this is still bad. Social media is all about being distinct and grabbing attention. Getting same-ified into a bland, featureless identity isn't the same as carefully crafting a persona to maximize clicks.
> People will have no shame passing off well written things as an output of their talent and hard work.
Sometimes I don't want to waste my time crafting a professional e-mail to a bunch of jerks full of themselves. Maybe I want to write it as it comes off my brain, and let my digital scribe reformulate it so that the people reading it feel respected/validated/flattered. Am I putting up a fake identity then? Am I presenting an illusion of professionalism? Maybe writing "Best regards" instead of "Bye" is the facade of professionalism in the first place.
"Best Regards" vs "Bye" is one thing, but unless you're the owner of the company, sending a client "fuck you, pay me" just isn't professional and is probably going to get you fired.
I mean, I hear that. I was asked to be "nicer" in emails once, and when pressed for specific changes, was finally asked to occasionally say "Thanks!" as my sign-off instead of "Thanks,".
The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.
It's obviously a personal thing, but I even feel a little guilty clicking the autosuggested "thanks" when responding to a text. Everyone has the threshold they're comfortable with.
With the normalization of default workflow to chuck all comms through an LLM filter settling in these days, I don't think it's even people trying to pass off illusions as their own persona. All it takes is a copy-paste and hitting the Make-Me-Some-Text button. I'm sure the responses will be frustratingly amusing if you were to press them and call them out on it (including trying to pass off the illusion).
Many people didn't think much about what they were trying to convey (or how they presented themselves) when drafting correspondence in the past; now, many people think just as little, and often continue, as before, to neglect to meaningfully proofread whatever they had the LLM generate for them before hitting Send.
Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.
At a high level I see convergence of styles, topics, and behaviors to a generic form, both in "AI" and social media. Which to me suggests that the "AI" solutions are doing exactly what we would do ourselves, just faster.
I've only recently started using AI, and have discovered my use or rejection of it is predicated on my feelings for the task. This argument of "authenticity" really resonates.
I'm a manager, so when I'm sending emails to a customer or talking with one of my reports, I care deeply - so you might get some overwrought florid prose, but it's my overwrought florid prose.
On the other hand, I have to lead a weekly meeting that exists solely to provide evidence for compliance reasons, something out of the CIA's sabotage field manual that David Graeber has probably written about. But it is now a thirty-second exercise in uploading a transcript to ChatGPT, prompting for three evidentiary bullet points, and pasting the output into a wiki no human will ever read.
I was thinking about the authenticity of my writing earlier this week and wondering why I have no problem accepting code from an AI and committing it, but I find the idea of passing off an AI's writing as my own feels not just wrong, but immoral on the level of purposeful plagiarism. I feel a distinct difference, but I'm not particularly clear why. I'm okay with sharing AI writing, but only when I've clearly communicated it was written by AI.
Probably related to why I can copy a piece of code from elsewhere (with sufficient work to verify it does what I expect and only what I expect) but I don't copy a quote and use it as my own. My words are my words. My code doesn't have the same guarantee.
Code uses a simplified set of instructions to instruct a computer to do things. Hopefully these instructions can be understood and maintained by a human.
Writing uses the entire breadth of human language to convey information between human beings with unique and complex understandings of the universe. If those words come from a machine that is not you - that is not someone - you ought to disclose it.
It's probably because communication is a complex dance between humans, where you're constantly signaling that you're part of some group with the other person. Think of any profession or team, where members share common ways of speaking: jargon, inside jokes, terms of art, terms of endearment, etc. It's useful for cohesion, trust, and efficiency because you're assured that the person you're talking to is indeed "one of us."
If you use an AI to communicate, then you either fail to mimic those group membership signals and you look like an idiot. Or you succeed and show that a machine can fool humans at this game. Any grifter can come along and establish trust in a group by relying on this tech. This dance that humans have been doing since the dawn of time suddenly breaks down, and that doesn't feel good.
That's also what I do. I hand-write every email because these words have my name under them. On the other hand, if I'm asking the tax office to issue a specific document, I let AI handle it.
I wonder how people feel about "dumber" tools like hemingway.app that make mechanical suggestions for readability, like suggesting simple synonyms and highlighting sentences that are too long. I've used it for writing documents that were important and that I knew a lot of people would read.
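For concreteness, here's a rough sketch of the purely mechanical kind of check such a tool performs (the word list and 25-word threshold are made-up assumptions for illustration, not hemingway.app's actual implementation):

```python
import re

# Illustrative only: a toy synonym list and an arbitrary sentence-length limit.
SIMPLER = {"utilize": "use", "commence": "begin", "endeavor": "try"}
MAX_WORDS = 25

def review(text: str) -> list[str]:
    notes = []
    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if len(words) > MAX_WORDS:
            notes.append(f"Long sentence ({len(words)} words): {sentence[:40]}...")
        for w in words:
            bare = w.lower().strip(".,;:!?")
            if bare in SIMPLER:
                notes.append(f'Consider "{SIMPLER[bare]}" instead of "{bare}"')
    return notes

print(review("We will endeavor to utilize the new process. It works."))
```

Everything is rule-based and local, which is exactly why it can't quietly rewrite your tone the way an LLM "polish" does.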
AI "polishing" tools are essentially a form of anti-compression. Lets take some information represented concisely and needlessly pad it with formalities and waffle so it appears more "professional" (whilst also throwing away useful "metadata" like message tone).
No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
> No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
Not entirely, there’s still the energy usage and stock price increases. All because everyone’s too anxious to just talk to each other directly.
My (evidently mentally disabled) previous manager was so proud of being able to use AI to generate the bullshit he sent out to clients. What the morons are really doing is proving they're useless, let them.
I use Grammarly to check for errors I make when writing more serious stuff (English is not my native language), but any suggestion it sends my way changes the tone of the text so much that it sounds like it's written by a PR agency with a fake, forced attitude: bland and colorless.
So no, thank you. Correct my textbook punctuation mistakes, and leave my wordy and "not positive enough" sentences to me.
I'm getting increasingly irritated by Grammarly's attempts to boringify my writing. I've even considered doing away with it entirely, even if it means I have to do my own spell-checking.
I'm working on a dystopia where the resistance is using text-in-text steganography to coordinate, so unpolished communication is flagged for extra scrutiny because all those stylistic choices might be hiding something.
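A minimal sketch of what that kind of text-in-text steganography could look like, hiding bits in stylistic choices between interchangeable phrasings. The word pairs are arbitrary assumptions for illustration, not anything from the story; note that an LLM "polish" pass would scramble the hidden payload, which is rather the point:

```python
# Each bit of the secret selects one of two interchangeable phrasings,
# so the "style" of the cover text is the payload.
PAIRS = [("thanks", "cheers"), ("soon", "shortly"), ("ok", "fine"), ("hi", "hello")]

def encode(bits: str) -> str:
    assert len(bits) <= len(PAIRS), "not enough word pairs for the payload"
    return " ".join(pair[int(b)] for pair, b in zip(PAIRS, bits))

def decode(text: str) -> str:
    # Assumes the text still uses exactly the words chosen by encode().
    return "".join(str(pair.index(w)) for pair, w in zip(PAIRS, text.split()))

stego = encode("1010")
print(stego)          # cheers soon fine hi
print(decode(stego))  # 1010
```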
I wouldn't say the author's style is unique or individual in any way. Every single tumblr blog sounds like that. You could easily create a "make edgy" function that would take your formal writing and turn it into that kind of prose. Is it better or worse than "polish"? There's no substantial difference. The "polish" version sure sounds less exhausting than the original.
It's personal. "Unique" and "individual" might not be the best words to describe it, but it's clearly a style they've intentionally adopted. They appear to have been quite successful for it, too!
Digital culture was fake and performative and insincere enough before Turboclippy: fuck that with something sharp.
It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.
I think LLMs are transformative but it's incredible to me how unimaginative most product managers have been. It reminds me of the 90s when people discovered GIFs can be put on web pages so every page had to have a hundred of them. It was tacky, as is most embedded AI.
> ...the fact everyone is shoving it in my face reeks of desperation.
The tech industry is in real trouble.
> A lot of them welcome this kind of thing; they evaluate by popularity and likes.
Turns out that Idiocracy is not that far behind (https://www.imdb.com/title/tt0387808/)
> The tech industry is in real trouble.
Maybe it isn’t the tech industry as a whole, just consumer-facing apps.
AI exists in a Matrix where toxic positivity is enforced with electric shocks.
(tryna be funny, not patronizing. but the machinery of subjectivity production is ofc very real)
> Am I putting up a fake identity then? Am I presenting an illusion of professionalism?
When you did it manually you were putting up a fake identity. ofc using an AI to fake you being fake for work would be fake.
The idea that our work personas aren't at least a little fake is toxic. Depending on where you work it might be a lot fake.
Wear your character as lightly as a cap, don't get tricked into method acting.
The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.
Many people didn't think about what they are trying to convey (or self-analysed how they present themselves) when drafting correspondence in the past; now, many people think just as not-hard and often continue, like before, to neglect to meaningfully proofread whatever they had the LLMs generate for them before hitting Send.
Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.
At a high level I see convergence of styles, topics, behaviors to a generic form, both in "AI" and social media. Which to me suggest that the "AI" solutions are doing exactly what we would do ourselves, just faster.
Dead Comment
> No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
It just feels absurd.
The more the first pads, the more the second is needed.
If AI really were Intelligent, I'd fear it's an organism making sure it's needed in the ecosystem.
Though, its steganography is a bit more obvious, given the "you've got to be able to read it".
The Kindle version of the book starts with https://imgur.com/uIBjwlQ
This would give you the opportunity to have another ending to the book.
> You have an individual and unique way of speaking and writing? You're going to wish your e-mail finds people well, corporate-monkey.
https://www.youtube.com/watch?v=NV0CtZga7qM
> It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.
So is a landmine.