Readit News
taormina commented on I Am An AI Hater   anthonymoser.github.io/wr... · Posted by u/BallsInIt
sodapopcan · a day ago
Post was flagged, huh.
taormina · 12 hours ago
This shouldn't be flagged...
taormina commented on I Am An AI Hater   anthonymoser.github.io/wr... · Posted by u/BallsInIt
marcosdumay · a day ago
It's interesting that the social reaction started to surface as soon as the companies failed to get more investment and decided to increase prices.

I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?

taormina · a day ago
Are they running out of funds to drown out the protesters with their own marketing?
taormina commented on Scamlexity: When agentic AI browsers get scammed   guard.io/labs/scamlexity-... · Posted by u/mindracer
anal_reactor · 4 days ago
Imagine an agent being a roommate. They see that toilet paper is running out, they go to the supermarket, they buy more, they charge you money. All without you saying a word. Sure, it might not be your favorite brand, or the price might not be optimal, but realistically, the convenience of not having to think about buying toilet paper is definitely worth the price of having your roommate choose the details. After all, it's unlikely they'll make a catastrophically bad decision.

This idea has been tried before and it failed not because the core concept is bad (it isn't), but because implementation details were wrong, and now we have better tools to execute it.

taormina · 4 days ago
The idea has been tried before and it failed because people don’t actually want this product at the scale the inventors thought. Amazon has never stopped doing this. Adding an element of indeterminism to the mix doesn’t make this a better product. Imagine what the LLM is going to hallucinate with your credit card attached.
taormina commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
danenania · 5 days ago
It can definitely be difficult and frustrating to try to use LLMs in a large codebase—no disagreement there. You have to be very selective about the tasks you give them and how they are framed. And yeah, you often need to throw away what they produced when they go in the wrong direction.

None of that means they’re getting worse though. They’re getting better; they’re just not as good as you want them to be.

taormina · 5 days ago
I mean, this really isn't a large codebase; it's a small-to-medium-sized codebase as judged by prior jobs/projects. 9000 lines of code?

When I give them the same task I gave them the day before, and the output is noticeably worse than the previous model version's, is that better? When day-by-day performance feels like it's degrading?

They are definitely not as good as I would like them to be, but that's to be expected when the people hyping them up are professionals begging for money.

taormina commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
danenania · 5 days ago
I see a lot of people saying things like this, and I’m not really sure which planet you all are living on. I use LLMs nearly every day, and they clearly keep getting better.
taormina · 5 days ago
Grok hasn't gotten better. OpenAI hasn't gotten better. Claude Code with Opus and Sonnet, I swear, is getting actively worse. Maybe you only use them for toy projects, but attempting to get them to do real work in my real codebase is an exercise in frustration. Yes, I've done meaningful prompting work, and I've set up all the CLAUDE.md files, and then it proceeds to completely ignore everything I said, all of the context I gave, and just craps out something completely useless. It has accomplished a small amount of meaningful work, exactly enough that I think I've broken even on time rather than come out behind compared to just doing it all myself.

I get to tell myself that it's worth it because at least I'm "keeping up with the industry," but I honestly just don't get the hype train one bit. Maybe I'm too senior? Maybe the frameworks I use, despite being completely open source and available as training data for every model on the planet, are too esoteric?

And then the top post today on the front page is telling me that my problem is that I'm bothering to supervise, and that I should be writing an agent framework so that it can spew out the crap in record time... But I need to know what is absolute garbage and what needs to be reverted. I will admit that my usual pattern has been to try to prompt it into better test coverage, specific feature additions, etc. on the nights and weekends, and then focus my daytime working hours on reviewing what was produced. About half the time I review it and have to heavily clean it up to make it usable, but more often than not, I revert the whole thing and just start on it myself from scratch. I don't see how this counts as "better".

taormina commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
dcre · 5 days ago
The first premise of the argument is that LLMs are plateauing in capability and this is obvious from using them. It is not obvious to me.
taormina · 5 days ago
Just anecdata, but they keep releasing new versions and they keep not being better. What would you describe this as if not plateauing? Worsening?
taormina commented on 95% of Companies See 'Zero Return' on $30B Generative AI Spend   thedailyadda.com/95-of-co... · Posted by u/speckx
lacy_tinpot · 8 days ago
Why do people so desperately want to see AI fail?
taormina · 8 days ago
Why do people so desperately want to see AI succeed? The financial investment explains it for some.
taormina commented on 95% of Companies See 'Zero Return' on $30B Generative AI Spend   thedailyadda.com/95-of-co... · Posted by u/speckx
tovej · 8 days ago
The shovel business is good as long as the gold rush lasts. Once the gold rush is over, you're going to have to deal with a significant decrease in volume, unless you can find other customers.

Crypto's over, gaming isn't a large enough market to fill the hole, the only customers that could fill the demand would be military projects. Considering the arms race with China, and the many military applications of AI, that seems the most likely to me. That's not a pleasant thought, of course.

The alternative is a massive crash of the stock price, and considering the fact that NVIDIA makes up 8% of everyone's favorite index, that's not a very pleasant alternative either.

It seems to me that an ultra-financialized economy has trouble with controlled deceleration; once the hype train is moving, it's full throttle until you hit a wall.

taormina · 8 days ago
There aren't enough GPUs for average gamers to buy anything vaguely recent, and they would love to be able to. Making the best GPUs on the planet is still a huge business, and the market is quite large. Scalping might finally die at this rate, but NVDA wasn't making any of the scalping money anyway, so who cares? Data centers and gamers still need every GPU NVDA can make.
taormina commented on Vibe coding tips and tricks   github.com/awslabs/mcp/bl... · Posted by u/mooreds
moolcool · 11 days ago
AI is cool and all, but the biggest thing that makes me think that we’re in a bit of a bubble is seeing otherwise conservative organizations take “vibe coding” seriously
taormina · 11 days ago
They get paid the more vibe coding occurs on their platform, so of course they have a two-pizza team dedicated to milking the latest trend.
taormina commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
motorest · 13 days ago
> I'd love a source to these claims.

Have you been living under a rock?

You can start getting up to speed by how Amazon's CEO already laid out the company's plan.

https://www.thecooldown.com/green-business/amazon-generative...

> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)

That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.

In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.

taormina · 13 days ago
Congratulations on believing the marketing. He has about 2.46 trillion reasons to make this claim. In other news, water is wet and the sky is blue.

u/taormina
Karma: 660 · Cake day: February 21, 2013
About: https://taormina.io