Readit News
ryeats · a month ago
You know that teammate who makes more work for everyone else on the team because they do what they're asked, but in the buggiest, most incomprehensible way? The one who, when you finally get them to move on to another team, makes you realize how much time you spent corralling them and fixing their subtle bugs, and how, now that they're gone, work doesn't seem like so much of a chore?

That's AI.

Spooky23 · a month ago
Just like a poorly managed team, you need to learn how to manage AI to get value from it. All ambiguous processes are like this.

In my case, I find the value of LLMs with respect to writing is consolidation. Use them to make outlines, not prose. One example: I record voice memos when driving or jogging and turn them into documents that can be the basis for all sorts of things. At the end of the day, it saves me a lot of time and arguably makes me more effective.

AI goes bad because it’s not smart, and it will pretend that it is. Figure out the things it does well for your scenario and exploit it.

DavidPiper · a month ago
We need to update Hanlon's Razor: Never attribute to AI that which is adequately explained by incompetence.
xerox13ster · a month ago
And just like the original Hanlon’s Razor, this is not an excuse to be stupid or incompetent.

It is not a reason to accept stupidity or incompetence. We should reject these things and demand better.

chistev · a month ago
Thank you.
blibble · a month ago
> You know that teammate

now imagine he can be scaled indefinitely

you thought software was bad today?

imagine Microsoft Teams in 5 years time

darthcircuit · a month ago
I’m not even looking forward to Microsoft Teams on Monday.
ThatMedicIsASpy · a month ago
I only need to look at the past 5 years of Windows
bambax · a month ago
I'm extremely wary of AI myself, especially for creative tasks like writing or making images, but this feels a little over the top. If you let it run wild then, yes, the result is disaster, but for well-defined jobs with a narrow scope, AI can save a lot of time.
runiq · a month ago
In the context of code, where review bandwidth is the bottleneck, I think it's spot on. In the arts, comparatively -- be they writing, drawing, or music -- you can feel almost at a glance that something is off. There's a bit of a vibe check thing going on, and if that doesn't pass, it's back to the drawing board. You don't inherit technical debt like you do with code.
0xEF · a month ago
You are not wrong, but I pose the argument that too many people approach Gen AI as a replacement instead of a tool, and therein lies the root of the problem.

When I use Claude for code, for example, I am not asking it to write my code. I'm asking it to review what I have written and either suggest improvements or ways to troubleshoot a problem I'm having. I don't always follow its advice, either, but that depends on how much I understand the reply. Sometimes it outputs something that makes sense at my current skill level; sometimes it proposes things I know nothing about, in which case I ask it to break them down further so I can go search the Internet for more info and see if I can learn more, which pushes the limits of my skill level.

It works well, since my goal is to improve what I bring to the table and I have learned a lot, both about coding and about prompt engineering.

When I talk to other people, they accuse me of having the AI do all the work for me, because that's how they approach their own use of it. They want the AI to produce the whole project, as opposed to just using it as a second brain to offload some mental chunking. That's where Gen AI fails, and the user spends all their time correcting convoluted mistakes caused by confabulation. Unless they're making a simple monolithic program or script, that is, but even then there are often hiccups.

Point is, Gen AI is a great tool, if you approach it with the right mindset. The hammer does not build the whole house, but it can certainly help.

cardanome · a month ago
Generative AI is like micromanaging a talented junior dev who never improves. And I mean micromanaging to such a toxic degree that no human would ever put up with it.

It works, but it's simply not what most people want. If you love to code, then you've just abstracted away the most fun parts and now only do the boring parts. If you love to manage, well, managing actual humans and seeing them grow and become independent is much more fulfilling.

On a side note, I feel like prompting and context management come easier to me personally as a person with ADHD, since I am already used to working with forms of intelligence different from my own. I am used to having to explicitly state my needs. My neurotypical co-workers get frustrated that the LLM can't read their minds and always tell me that it should know what they want. When I nudge them to give it more context and explain better what they need, they often resist and say they shouldn't have to. Of course I am stereotyping a bit here, but it's still an interesting observation.

Prompting is indeed a skill, though I believe the skill ceiling will drop as the tools get better, so I wouldn't bank too much on it. What will stay valuable for a long time is probably general software architecture skills.

scarecrowbob · a month ago
"Gen AI is a great tool, if you approach it with the right mindset."

People keep writing this sentence as if they aren't talking to the most tool-ed up group of humans in history.

I have no problems learning tools, from chorded key shortcuts to awk/sed/grep to configuring all three of my text editors (vim, sublime, and my IDE) to work for their various tasks.

Hell, I have preferred ligature fonts for different languages.

Sometimes tools aren't great and make your life harder, and it's not because folks aren't willing to learn the tool.

bdangubic · a month ago
smart people are reading comments like this and going “I am glad I am in the same market as people making such comments” :)
ookblah · a month ago
seriously, the near future is going to be:

1) people who reject it completely, for whatever reason.

2) people who use it lazily and produce a lot of garbage (let's be honest, this is probably going to happen a lot, which may be why group #1 hates this future; it reminds me of the outsourcing era).

3) people who selectively use it to their advantage.

no point in groups 1 and 3 trying to convince each other of anything.

IAmGraydon · a month ago
I’m glad for now. Understanding how to utilize AI to your advantage is still an edge at the moment, but it won’t be long before almost everyone figures it out.
billy99k · a month ago
You can think that... and you will eventually be left behind. AI is not going anywhere and can be used as a performance booster. Eventually, it will be a requirement for most tech-based jobs.
andersmurphy · a month ago
This reminds me of crypto’s “have fun being poor”. Except now it’s “have fun being left behind/being unemployed”. The more things change the more things stay the same.
sampl3username · a month ago
Left behind what? Consumeristic trash?
ryeats · a month ago
I was being a bit melodramatic. I'll use it occasionally, and if AI gets better it can join my team again. I don't love writing boilerplate; I just know it's not good at writing maintainable code yet.

Deleted Comment

rsynnott · a month ago
I mean, the promoters of every allegedly productivity-improving fad have been saying this sort of thing for all of the twenty-odd years I’ve been in the industry.

If LLMs eventually become useful to me, I’ll adopt LLMs, I suppose. Until then, well, fool me once…

BrouteMinou · a month ago
When all you've got is pontificating...
threatripper · a month ago
You sound bitter. Did you try using more AI for the bug fixing? It gets better and better.
ryeats · a month ago
My interests tend to be bleeding edge, where there is little training data. I do use AI to rubber duck, but can rarely use its output directly.
skydhash · a month ago
Cognitive load isn't related to the difficulty of a task. It's about how much mental energy is spent monitoring it. To reduce cognitive load, you either boost confidence or stop caring. You can't have confidence in AI output, and most people proposing it seem to be preaching not to care about quality (because quantity, yay).
Arainach · a month ago
One of the biggest problems with AI is that it doesn't get better and better. It makes the same mistakes over and over instead of learning like a junior eng would.

AI is like the absolute worst outsourced devs I've ever worked with - enthusiastically saying "yes I can do that" to everything and then delivering absolute garbage that takes me longer to fix/convince them to do right than it would have taken for me to just do it myself.

ants_everywhere · a month ago
My writing style is pretty labor intensive [0]. I go through a lot of drafts and read things out loud to make sure they work well etc. And I tend to have a high standard for making sure I source things.

I personally think an LLM could help with some of this, and this is something I've been thinking about the past few days. But I'd have to build a pipeline and figure out a way to make it amplify what I like about my voice rather than have me speak through its voice.

I used to have a sort of puritanical view of art. And I think a younger version of myself would have been low key horrified at the amount of work in great art that was delegated to assistants. E.g. a sculptor (say Michelangelo) would typically make a miniature to get approval from patrons and the final sculpture would be scaled up. Hopefully for major works, the master was closely involved in the scaling up. But I would bet that for minor works (or maybe even the typical work) assistants did a lot of the final piece.

The same happens (and has always happened) with successful authors. Having assistants do bits here or there. Maybe some research, maybe some corrections, maybe some drafts. Possibly relying on them increasingly as you get later in your career or if you're commercially successful enough to need to produce at greater scale.

I think LLMs will obviously fit into these existing processes. They'll also be used to generate content that is never checked by a human before shipping. I think the right balance is yet to be seen, and there will always be people who insist on more deliberate and slower practices over mass production.

[0] Aside from internet comments of course, which are mostly stream of consciousness.

bgwalter · a month ago
Michelangelo worked alone on the David for more than two years:

https://en.wikipedia.org/wiki/David_(Michelangelo)#Process

Maybe later he got lazier. I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

Even research was something many authors simply could not afford.

ants_everywhere · a month ago
Maybe Michelangelo was a bad choice, but I hope it's clear from my wording that I was using Michelangelo as an example, not saying anything specific about his use of assistants compared to his peers. And David is a masterpiece, not a minor work.

I don't see where the article says he worked alone on David. It does seem that he used a miniature (bozzetto) and then scaled up with a pointing machine. One possibility is he made the miniature and had assistants rough out the upscaled copy before doing the fine work himself. Essentially, using the assistants to do the work you'd do on a band saw if you were carving out of wood.

> I haven't really heard of famous authors using assistants for drafts instead of research (I don't mean commercial authors like Stephen King).

Restricting to non-commercial authors narrows it down, since hiring assistants to write drafts probably only makes financial sense if the cost of the assistant is less than the value of the time you'd spend drafting.

Alexandre Dumas is maybe a bit more highbrow than Stephen King:

> He founded a production studio, staffed with writers who turned out hundreds of stories, all subject to his personal direction, editing, and additions. From 1839 to 1841, Dumas, with the assistance of several friends, compiled Celebrated Crimes, an eight-volume collection of essays on famous criminals and crimes from European history. https://en.wikipedia.org/wiki/Alexandre_Dumas

But in general I agree, drafts are often the heart of the work and it's where I'd expect masters to spend a lot of their time. Similarly with the statue miniatures.

netule · a month ago
James Patterson comes to mind. He simply writes detailed outlines for the plots of his novels and has other authors write them for him. The books are then published under his name, which is more like a brand at that point.
tombarys · a month ago
Good point! Thanks.

I like the perspective of "choices" during creation. It is an essential principle of real art that it is the result of thousands or millions of deliberate choices. That is what we admire in art. If you mostly use a machine (or any other means that decides instead of you and for you) to create, you as a creator simply make fewer choices.

In this case, you delegate many of your experienced/crazy/hard decisions to the model (which is based on such decisions already made by other artists, but combines them in a random way). It is like decompressing a JPEG: some things are just hallucinated by the machine.

From the perspective of pure human creativity, the result is thin, diluted, even if it seems deliberate. In my opinion, art lovers will seek out the dense art made by humans, maybe even asking for some kind of "proof" of the human-based process. What do you think?

Deleted Comment

BolexNOLA · a month ago
At its most basic level I just like throwing things I’ve written at ChatGPT and telling it to rewrite it in “x” voice or tone, maybe condense it or expand on some element, and I just pick whatever word comes to mind for the style. Half the time I don’t really use what it spits out. I am a much stronger editor than I am a writer, so when I see things written a different way it really helps me break through writer’s block or just the inertia of moving forward on something. I just treat it like a mediocre sounding board and frankly it’s been great for that.

When I was in high school I really leaned on friends for edits. Not just because of the changes they would make (though they often did make great suggestions), but for the changes I would make to their changes after. That’s what would inevitably turn my papers from a B into an A. It’s basically the same thing in principle. I need to see something written in a way I would not write it or I start talking in circles/get too wordy. And yes this comment is an example of that haha

mrbluecoat · a month ago
I avoided cell phones too when they first came out. I didn't want the distraction or "digital leash". Now they're a stable fixture in my life. Some technology is simply transformational, and it's just a matter of time until almost everyone comes to accept it at some level. Time will tell if AI breaks through the hype curve, but my gut feeling is it will within 5 years.
GlacierFox · a month ago
My phone is a fixture in my life, but I actually spend a lot of effort trying to rid myself of it. The thing for me, currently, on the receiving end, is that I just don't read anything (apart from books) as if it has any semblance of authenticity anymore. My immediate assumption is that a large chunk of it, or sometimes the entire piece, has been written or substantially altered by AI. Seeing this spread into the publishing and writing domain is simply depressing.
uludag · a month ago
I avoided web3/crypto/bitcoin altogether when they came out. I'm happy I did and I don't see myself diving into this world anytime soon. I've also never used VR/AR, never owned a headset, never even tried one. Again, I don't see this changing any time soon.

Some technology is just capital trying to find growth in new markets and doesn't represent a fundamental value add.

rohit89 · a month ago
Crypto is a tech that solves problems that most people don't care about.

VR/AR is tech that is nowhere near ready, so it's premature to make judgements there. I am bullish on XR.

In terms of value-add, technology often needs to be invented first before we can decide whether it's worthwhile. Nobody asked for computers in every home, smartphones in every pocket, etc.

scarier · a month ago
I don't totally disagree with you, but I think it's important to note that just because a technology isn't value-adding to you doesn't mean it isn't fundamentally value-adding in general. VR has been game-changing in immersive simulation for me, for example.
cheschire · a month ago
smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

I expect this will be around the time that websites are no longer a thing and we see companies directly pumping information into AI agents which are then postured as the only mechanism for receiving certain information.

As an example, imagine Fandango becoming such a powerful movie agent that theaters no longer need websites. You don't ask it questions. Instead, it notifies YOU based on what it knows about your schedule, your preferences, your income, etc. Right around 5pm it says "Hey did you know F1 is showing down the street from you at Regal Cinema in IMAX tonight at 7:30? That will give you time to finish your 30 minute commute and pickup your girlfriend! Want me to send her a notification that you want to do this?"

People install a litany of agents on their smartphones, and they train their agents based on their personal preferences etc, and the agents then become the advertisers directly feeding relevant and timely information to you that maximizes your spend.

MCP will probably kill the web as we know it.

TheOtherHobbes · a month ago
That's not what will happen. The ad-tech companies will pivot and start selling these services as neutral helpers, when in fact they'll use their knowledge of your schedule, preferences, and income to get you to spend money on goods and services you don't really want.

It will be controlling and disempowering - manipulative personality-profiled "suggestions" with a much higher click rate than anything we have today.

And the richer you are, the more freedom you'll have to opt out and manage your own decisions.

sampl3username · a month ago
>smart phones became a fixture because they were a key enabler for dozens of other things like fitness tracking fads, logging into key services, communication methods that were not available on desktop, etc. If AI becomes a key enabler of business, then yeah people won't have a choice.

This. I need access to banking, maps, and 2FA. If I could use a dumb phone with just a camera, GPS, and WhatsApp, I would.

coliveira · a month ago
If this happens I have an excellent business strategy. Human concierges that will help people with specific areas of their lives. Sell a premium service where paid humans will interact with all this noise so clients will never have to talk to machines.
ApeWithCompiler · a month ago
True, but at least for me the reverse is also true: smartphones are a stable fixture in my life, and by now I try to get rid of them as much as possible.
threatripper · a month ago
What AI currently lacks is mainly context. A well-trained, experienced human knows their reader very well and knows what they don't need to write, and for what they do write, they know the tone they need to hit. I fully expect that in the future this will turn around: the author will write the facts and framework with the help of AI, and your AI will extract and condense it for your consumption. Your AI will know everything about you, everything you've ever consumed; it will know how you think and what it needs to tell you, in which tone, to give you the best experience. You will be informed better than ever before. The future of AI will be bright!
timeon · a month ago
Analogies are not arguments.
mobeets · a month ago
I’m with you. I think you did a good job of summarizing all the places where LLMs are super practical/useful, but agreed that for prose (as someone who considers themselves a proficient writer), it just never seems to contribute anything useful. And for those who are not proficient writers, I'm sure it can be helpful, but it certainly doesn't contribute any new ideas if you're not providing them.
jml78 · a month ago
I am not a writer. My oldest son, 16, started writing short stories. He did not use AI for any of the words on the page. I did, however, recommend that he feed his stories to an LLM and ask for feedback on things that are confusing or unclear, or on holes in the plot.

Not to take any words it gives, but to read what it says and decide whether those things are true and, if so, make edits. I am not saying it is a great editor, but it is better than any other resource he has access to as a teenager. Yeah, better than me or his mom.

moregrist · a month ago
Have you looked for:

- Writing groups. They often have sessions that provide feedback and also help writers find/build a sense of community. Your son would also get to listen to other writers talk about their work, problems they’ve run into and overcome, and other aspects of their craft.

- School (sometimes library) writing workshops. These help students develop bonds with their peers and benefit both sides: the ones giving feedback are learning to be better editors.

Both of these offer a lot of value in terms of community building and getting feedback from people invested in the craft of writing.

SV_BubbleTime · a month ago
Large Language Model, not Large Fact Model.
zB2sj38WHAjYnvm · a month ago
This is very sad.
tolerance · a month ago
For things like coding, LLMs are useful, and DEVONthink's recent AI integrations let me use local models as something like an encyclopedia or thesaurus to summarize unfamiliar blocks of text. At best I use it like scratch paper.

I formed the habit of exporting entire chats to Markdown and found them useless. Whatever I found useful in a given response either sparked a superseding thought of my own or was just a reiteration of my own intuitive thoughts.

I've moved from ChatGPT to Claude. The results are practically the same as far as I can tell (although my gut tells me I get better code from Claude), but I think Anthropic has a better feel for response readability. Sometimes processing a ChatGPT response is like reading a white paper.

Other than that, LLMs get predictable to me after a while and I get why people suspect that they're starting to plateau.

tombarys · a month ago
You are right. It has plateaued and even degraded in some ways. Or have we just become more sensitive to its bullshitting?
tolerance · a month ago
A little bit of both, I think. And I suspect that we aren't going to see another noticeable leap forward until specialized models become commonplace and/or people figure out exactly how LLMs productively fit their interests.
tombarys · a month ago
I am a book publisher & I love technology. It can empower people. I have been using LLM chatbots since they became widely available. I regularly test machine translation at our publishing house in collaboration with our translators. I have just completed two courses in artificial intelligence and machine learning at my alma mater, Masaryk University, and I am training my own experimental models (for predicting bestsellers :). I consider machine learning to be a remarkable invention and catalyst for progress. Despite all this, I have my doubts.
esjeon · a month ago
I know a publisher who translates books (English to Korean). He works alone these days. Using GPT, he can produce a decent-quality first draft within a day or two. His later steps are also vastly accelerated because GPT reliably catches typos and grammar errors. It doesn't take more than a month to translate and print a book from scratch. Marvelous.

But I still don't like that the same model struggles w/ my projects...

tombarys · a month ago
This is a topic for another article! We tried hard to use (test) translation tools in some real-life scenarios. The results seemed helpful at first, but then we spent a lot of time again to reach our standards. As a side effect, our translators and editors felt they were losing their own creativity and sensitivity in the process.

We are a publisher that succeeded thanks to the highest-quality translations. Our readers appreciate that and ask for it. The Czech language is very rich, and these machines are not able to make the most of it. The non-fiction sphere also needs a lot of fact-checking, e.g. in local and field terminology. So even though we can imagine the translation process being technically shortened by machine translation, it would probably ruin our reputation in the long term.

At least for now...

Deleted Comment

jdietrich · a month ago
As a professional writer, the author of this post is likely a better writer than 99.99% of the population. A quick skim of his blog suggests that he's comfortably more intelligent than 99% of people. I think it's totally unsurprising that he isn't fully satisfied with the output of LLMs; what is remarkable is that someone in that position still finds plenty of reasons to use them.

Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce".

nerevarthelame · a month ago
I'm worried that an increasing number of people are relying on LLMs for things as fundamental to daily life as expressing themselves verbally or critical thinking.

Perhaps LLMs can move someone's results from the 25th percentile to the 50th for a single task. (Although there's probably a much more nuanced discussion to be had about that: people with poor writing skills can still have unique, valuable, and interesting perspectives that get destroyed in the median-ization of current LLM output.) But after a couple years of using LLMs regularly, I fear that whatever actual talent they have will atrophy below their starting point.

kaliszad · a month ago
The author is a great guy and indeed quite smart and meticulous in areas he cares about deeply. He is a published author with a reasonably popular book considering the market size: https://www.melvil.cz/kniha-jak-sbalit-zenu-20/ he has edited probably more books than he would like to admit as well. It's not surprising he is able to write a good article.

However, good writing is a skill you can get good at with enough practice. Read a lot, write a lot of garbage, consult more experienced writers, and eventually you will write readable articles. Do 10-100x more of that and you will be pretty great. The rest is skill and experience in many fields other than writing, which will inform how to write even better. Some of it is intelligence, luck, great mentors, and perhaps something we call talent. As with most things, you can get far just by working diligently, a lot.

ThrowawayR2 · a month ago
> "Now consider someone further down the scale - someone at the 75th, 50th or 25th percentile. The output of an LLM very quickly goes from "much worse than what I could produce" to "as good as anything I could produce" to "immeasurably better than anything I could hope to ever produce""

That does, to my mind, explain all the vengeful "haw haw, you're all going to get left behind" comments from some LLM proponents. They actually do benefit from LLMs, unlike the highest part of the scale, who are overrepresented on HN. Without realizing what that implies, they think they can overtake the highest part of the scale by using them. Well, we'll see.

antegamisou · a month ago
Idk, LLM writing style somehow almost always ends up sounding like an insufferable smartass Redditor spiel. Maybe it's only appealing to the respective audience.
johnnyfived · a month ago
What's interesting about thinking of code as art is that there is rarely a variety of ways of implementing logic that are all optimal. So if you decide on the implementation and have an LLM code it, you likely won't need to make major changes, given the right guidelines (I just mean a single script, for the sake of comparison).

Writing is entirely different, and for some reason generic writing, even when polished (the ChatGPT-esque tone), is so much more intolerable than, say, AI-generated imagery. Images can blend into the background; reading takes active processing, so we're much more sensitive. And the end user of a product cares zero, or next to zero, about AI code.

tombarys · a month ago
> "Images can blend in the background, reading takes active processing so we're much more sensitive. And for the end user of a product, they care 0 or next to 0 about AI code."

Very interesting point!