Readit News
cheevly commented on Turning ChatGPT's "Saved Memory" into a Persistent, Self-Updating Runtime Tool    · Posted by u/Alchemical-Gold
cheevly · 11 days ago
Yes, but at profoundly larger scale, outside of ChatGPT in my own agentic architecture.
cheevly commented on AI is propping up the US economy   bloodinthemachine.com/p/t... · Posted by u/mempko
lisbbb · 19 days ago
Everything I have worked on as a fullstack developer for multiple large companies over the past 25 years tells me that AI isn't just going to replace a bunch of workers. The complexity of those places is crazy and it takes teamwork to keep them running. Just look at what happens internally over a long holiday weekend at most big companies: they are often just barely meeting their uptime guarantees.

I was recently at a big, three-letter pharmacy company and I can't be specific, but just let me say this: they're always on the edge of having the main websites go down for this or that reason. It's a constant battle.

How is adding more AI complexity going to help any of that when they don't even have a competent enough workforce to manage the complexity as it is today?

You mention VR--that's another huge flop. I got my son a VR headset for Christmas in like 2022. It was cool, but he couldn't use it long or he got nauseous. I was like "okay, this is problematic." I really liked it in some ways, but sitting around with that goofy thing on your head wasn't a strong selling point at all. It just wasn't.

If AI can't start doing things with accuracy and cleverness, then it's not useful.

cheevly · 19 days ago
You have it so backwards. The complexity of those places is exactly why AI will replace them.
cheevly commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
atleastoptimal · a month ago
AI is being framed as the future because it is the future. If you can't see the writing on the wall then you surely have your head in the sand or are seeking out information to confirm your beliefs.

I've thought a lot about where this belief comes from: the general Hacker News skepticism towards AI, and especially towards big tech's promotion of and alignment with it in recent years. I believe it's due to fear of irrelevance and loss of control.

The general type I've seen most passionately dismissive of the utility of LLMs are veteran, highly "tech-for-tech's-sake" software/hardware people, far closer to Wozniak than Jobs on the Steve spectrum. These types typically earned their stripes working in narrow intersections of various mission-critical domains like open-source software, systems development, low-level languages, etc.

To these people, a generally capable all-purpose oracle capable of massive data ingestion and effortless inference represents a death knell to their relative status and value. AI's likely trajectory heralds a world where intelligence and technical ability are commodified and ubiquitous, robbing a sense of purpose and security from those whose purpose and security depend on their position in a rare echelon of intellect.

This increasingly likely future is made all the more infuriating by the annoyances of the current reality of AI. The fact that AI is so presently inescapable despite how many glaring security-affecting flaws it causes, how much it propagates slop in the information commons, and how effectively it emboldens a particularly irksome brand of overconfidence in the VC world is a preemptive insult to injury in the lead-up to a reality where AI will nevertheless control everything.

I can't believe these types I've seen on this site aren't smart enough to see the forest for the trees on this matter. My Occam's razor conclusion is that most are smart enough; they're just emotionally invested in anticipating a future where the grand promises of AI fizzle out and it's back to business as usual. To many, this is a salve necessary to remain reasonably sane.

cheevly · a month ago
Well-said and spot on.
cheevly commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
mlsu · a month ago
I hate AI. I'm so sick of it.

I read a story about 14 year olds that are adopting AI boyfriends. They spend 18 hours a day in conversation with chatbots. Their parents are worried because they are withdrawing from school and losing their friends.

I hate second guessing emails that I've read, wondering if my colleagues are even talking to me or if they are using AI. I hate the idea that AI will replace my job.

Even if it unlocks "economic value" -- what does that even mean? We'll live in fucking blade runner but at least we'll all have a ton of money?

I agree, nobody asked what I wanted. But if they did I'd tell them, I don't want it, I don't want any of it.

Excuse me, I'll go outside now and play with my dogs and stare at a tree.

cheevly · a month ago
You hate AI and want to go outside and stare at a tree? How are posts like this on HACKERnews? What is the point of all these types of posts on a site that is literally about hacking technology?
cheevly commented on LLM Inevitabilism   tomrenner.com/posts/llm-i... · Posted by u/SwoopsFromAbove
imiric · a month ago
> Look for frantic efforts by companies to offload responsibility for LLM mistakes onto consumers.

Not just by companies. We see this from enthusiastic consumers as well, on this very forum. Or it might just be astroturfing, it's hard to tell.

The mantra is that in order to extract value from LLMs, the user must have a certain level of knowledge and skill in how to use them. "Prompt engineering", now reframed as "context engineering", has become the practice that separates those who feel these tools waste more of their time than they save from those who feel they're many times more productive. The tools themselves are never the issue; clearly it's the user who lacks skill.

This narrative permeates blog posts and discussion forums. It was recently reinforced by a misinterpretation of a METR study.

To be clear: using any tool to its full potential does require a certain skill level. What I'm objecting to is the blanket statement that people who don't find LLMs to be a net benefit to their workflow lack the skills to do so. This is insulting to smart and capable engineers with many years of experience working with software. LLMs are not this alien technology that require a degree to use correctly. Understanding how they work, feeding them the right context, and being familiar with the related tools and concepts, does not require an engineering specialization. Anyone claiming it does is trying to sell you something; either LLMs themselves, or the idea that they're more capable than those criticizing this technology.

cheevly · a month ago
Unless you have automated fine-tuning pipelines that self-optimize models for your tasks and domains, you are not even close to utilizing LLMs to their potential. But stating that you don't need extensive, specialized skills is enough of a signal for most of us to know that offering you feedback would be fruitless. If you don't have the capacity by now to recognize the barrier to entry, experts are not going to take the time to share their solutions with someone unwilling to understand.
cheevly commented on I am uninstalling AI coding assistants from my personal computer   sam.sutch.net/posts/unins... · Posted by u/ssutch3
bluefirebrand · 2 months ago
I haven't had nearly the same experience of success with AI.

I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt

My work is pushing these tools hard and it is taking a huge toll on me. I'm constantly hearing how life changing this is, but I cannot replicate it no matter what I do

I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like I'm either extremely unskilled or everyone else is gaslighting me, basically nowhere in between

I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!

The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!

It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do, I feel I'm falling behind and being singled out

I probably need to change employers to get away from AI usage metrics at this point, but it feels like it's everyone everywhere guzzling the AI hype. It feels hopeless

cheevly · 2 months ago
And the irony is that those of us using AI to amplify our output and produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide among practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to generate tools at runtime.
cheevly commented on Show HN: Spegel, a Terminal Browser That Uses LLMs to Rewrite Webpages   simedw.com/2025/06/23/int... · Posted by u/simedw
cheevly · 2 months ago
Very cool! A now-retired AI agent of mine transformed live webpage content; here's an old video clip of it transforming HN into My Little Pony (with some annoying sounds): https://www.youtube.com/watch?v=1_j6cYeByOU. Skip to ~37 seconds for the outcome. I also made an open-source standalone Chrome extension; it should probably still work for anyone curious: https://github.com/joshgriffith/ChromeGPT
cheevly commented on Everyone Mark Zuckerberg has hired so far for Meta's 'superintelligence' team   wired.com/story/mark-zuck... · Posted by u/mji
weird_trousers · 2 months ago
So much wasted money it makes me sick…

There is so much money needed to solve other problems, especially in health.

I don’t blame the newcomers, but Zuckerberg.

cheevly · 2 months ago
Do you realize how much health-related research Zuckerberg’s foundation does? There was literally a post on here last week about it, geez.
cheevly commented on There are no new ideas in AI, only new datasets   blog.jxmo.io/p/there-are-... · Posted by u/bilsbie
Kapura · 2 months ago
Here's an idea: make the AIs consistent at doing things computers are good at. Here's an anecdote from a friend who's living in Japan:

> i used chatgpt for the first time today and have some lite rage if you wanna hear it. tldr it wasn't correct. i thought of one simple task that it should be good at and it couldn't do that.

> (The kangxi radicals are neatly in order in unicode so you can just ++ thru em. The cjks are not. I couldn't see any clear mapping so i asked gpt to do it. Big mess i had to untangle manually; anyway it woulda been faster to look them up by hand (there's 214))

> The big kicker was like, it gave me 213. And i was like, "why is one missing?" Then i put it back in and said count how many numbers are here and it said 214, and there just weren't. Like come on, you SHOULD be able to count.

If you can make the language models actually interface with what we've been able to do with computers for decades, I imagine many paths open up.
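For what it's worth, the "clear mapping" the quoted friend was looking for does exist in Unicode itself, no LLM required: the 214 Kangxi radicals sit contiguously at U+2F00..U+2FD5, and each carries a compatibility decomposition to its CJK unified ideograph, so NFKC normalization recovers the mapping. A minimal Python sketch:

```python
import unicodedata

# The 214 Kangxi radicals occupy U+2F00..U+2FD5 in radical order,
# so you really can "just ++ thru em".
KANGXI_FIRST, KANGXI_LAST = 0x2F00, 0x2FD5

# Each radical has a compatibility decomposition to its CJK unified
# ideograph, so NFKC normalization yields the mapping directly.
mapping = {
    chr(cp): unicodedata.normalize("NFKC", chr(cp))
    for cp in range(KANGXI_FIRST, KANGXI_LAST + 1)
}

print(len(mapping))          # 214
print(mapping[chr(0x2F00)])  # KANGXI RADICAL ONE -> 一 (U+4E00)
```

The decompositions come straight from the Unicode Character Database, which is exactly the kind of deterministic lookup a language model tends to fumble.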

cheevly · 2 months ago
Many of us have solved this with internal tooling that has not yet been shared or released to the public.
cheevly commented on I will fix your vibe-coded MVP – sgnt.ai   sgnt.ai/p/vibe-coded/... · Posted by u/yeahyeah321
cheevly · 2 months ago
Quality cringe

u/cheevly

Karma: 5 · Cake day: August 12, 2021
About
I'm building a natural language software runtime.