Readit News
stoneyhrm1 commented on We gave 5 LLMs $100K to trade stocks for 8 months   aitradearena.com/research... · Posted by u/cheeseblubber
irishcoffee · 14 days ago
> But wanted to give everyone a way to see how these models think…

Think? What exactly did “it” think about?

stoneyhrm1 · 14 days ago
"Pass the salt? You mean pass the sodium chloride?"
stoneyhrm1 commented on The RAG Obituary: Killed by agents, buried by context windows   nicolasbustamante.com/p/t... · Posted by u/nbstme
stoneyhrm1 · 3 months ago
I'm happy to be corrected because I'm no expert in the field, but isn't RAG just enriching context? It doesn't have to be semantic search; it could be an API call or grabbing info from a database.
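A minimal sketch of that broader reading of RAG, where "retrieval" is just a plain SQL lookup whose result gets prepended to the prompt; the shop.db schema and the call_llm helper are hypothetical stand-ins, not anyone's actual implementation:

    import sqlite3

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever model client you use (hypothetical).
        raise NotImplementedError("plug in your LLM client here")

    def retrieve_orders(user_id: str) -> str:
        # "Retrieval" here is an ordinary SQL query, not semantic search.
        # The shop.db file and orders table are made up for illustration.
        with sqlite3.connect("shop.db") as conn:
            rows = conn.execute(
                "SELECT id, status FROM orders WHERE user_id = ?", (user_id,)
            ).fetchall()
        return "\n".join(f"order {oid}: {status}" for oid, status in rows)

    def answer(question: str, user_id: str) -> str:
        # The "augmentation" step: prepend the retrieved rows to the prompt.
        context = retrieve_orders(user_id)
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")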
stoneyhrm1 commented on Vibe coding cleanup as a service   donado.co/en/articles/202... · Posted by u/sjdonado
HarHarVeryFunny · 3 months ago
It's a bit strange that Karpathy's "vibe coding" ever gained traction as a concept, although perhaps only among those without enough experience to know better.

As I understand it, what Karpathy was referring to as "vibe coding" was some sort of flow state "just talk to the AI, never look back" thing. Don't look at the generated code, just feel the AGI vibes ...

It sounds absolutely horrific if you care even the tiniest bit about the quality of what you are building! Good for laughs and "AGI is here!" Twitter posts, maybe for home projects and throwaway scripts, but as a way of developing serious software?!

I think part of the reason this has taken off (other than the cool sounding name) is because it came from Karpathy. The same idea from anyone less well known would have probably been shot down.

I've seen junior developers (and even not so junior), pre-AI, code in this kind of way - copy some code from someplace and just hack it until it works. Got a nasty core dump bug? - just reorder your source code until it goes away. At minimum, in a corporate environment this way of working would get you talked to, if not put on a performance plan or worse!

stoneyhrm1 · 3 months ago
Like it or not, vibe coding is here to stay. I don't agree with the concept either, but I've told people in my org that I've 'vibe coded' this or 'vibe coded' that. To us it just means we used AI to write most of the code.

I would never put it into production without some kind of review though; it's more for "I vibe coded this cool app, take a look, maybe this can be something bigger..."

stoneyhrm1 commented on Finding thousands of exposed Ollama instances using Shodan   blogs.cisco.com/security/... · Posted by u/rldjbpin
reilly3000 · 3 months ago
If any MCP servers are running, anyone with access to query the chat endpoint can use them. That could include file system access, GitHub tokens and more.
stoneyhrm1 · 3 months ago
The LLM endpoint served via Ollama or Hugging Face is not the one executing MCP tool calls; those are executed by the client that is interacting with the LLM. All the LLM does is take a prompt as input and produce text as output, that's it. Anything else is just a wrapper.
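A rough sketch of that split; the JSON shape below is an illustrative convention rather than the actual MCP wire format, and query_model and run_tool are hypothetical stand-ins. The point is that tool execution lives in the client, which holds the credentials, not in the model server:

    import json

    def query_model(prompt: str) -> str:
        # Stand-in for a raw text-in/text-out call to e.g. an Ollama endpoint.
        raise NotImplementedError("plug in your model endpoint here")

    def run_tool(name: str, args: dict) -> str:
        # Client-side tool execution; the model server never runs this.
        raise NotImplementedError("dispatch to your tools here")

    def chat_turn(prompt: str) -> str:
        # The model only ever returns text.
        reply = query_model(prompt)
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            return reply  # plain answer, nothing to execute
        if isinstance(call, dict) and call.get("type") == "tool_call":
            # It is the client that decides to run the tool and does so.
            result = run_tool(call["name"], call.get("args", {}))
            # Feed the result back so the model can finish its answer.
            return query_model(f"{prompt}\n\nTool result: {result}")
        return reply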
stoneyhrm1 commented on Finding thousands of exposed Ollama instances using Shodan   blogs.cisco.com/security/... · Posted by u/rldjbpin
stoneyhrm1 · 3 months ago
I understand the concern here, but isn't this the same as making any other type of server public? This is just about servers hosting LLMs, which I wouldn't even consider a huge security concern compared to hosting a should-be-internal tool publicly.

Servers that shouldn't be made public are made public, a cyber tale as old as time.

stoneyhrm1 commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
illiac786 · 4 months ago
Yeah I think that throwing more and more compute at the same training data produces smaller and smaller gains.

Maybe quantum compute would be significant enough of a computing leap to meaningfully move the needle again.

stoneyhrm1 · 4 months ago
What exactly is being moved? It's trained on human data; you can't make code more perfect than what's already written out there by a human.
stoneyhrm1 commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
somenameforme · 4 months ago
If AGI is ever achieved, it would open the door to recursive self improvement that would presumably rapidly exceed human capability across any and all fields, including AI development. So the AI would be improving itself while simultaneously also making revolutionary breakthroughs in essentially all fields. And, for at least a while, it would also presumably be doing so at an exponentially increasing rate.

But I think we're not even on the path to creating AGI. We're creating software that replicates and remixes human knowledge at a fixed point in time. And so it's a fixed target that you can't really exceed, which would itself already entail diminishing returns. Pair this with the fact that it's based on neural networks, which also invariably reach a point of sharply diminishing returns in essentially every field they're used in, and you have something that looks much closer to what we're doing right now - where all competitors will eventually converge on something largely indistinguishable from each other, in terms of ability.

stoneyhrm1 · 4 months ago
> revolutionary breakthroughs in essentially all fields

This doesn't really make sense outside computers. Since AI would be training itself, it needs to know the right answers, but as of now it doesn't really interact with the physical world. The most it could do is write code and check things that leave no room for interpretation: speed, latency, error rates, exceptions, etc.

But in what other fields would it do this? How can it make strides in biology? It can't dissect animals, and it can't figure out more about plants than what humans feed into the training data. Regarding math, math is human-defined. Humans said "addition does this", "this symbol means that", etc.

I just don't understand how AI could ever surpass anything humans have known before, when it lives by the rules we defined.

stoneyhrm1 commented on I'm Archiving Picocrypt   github.com/Picocrypt/Pico... · Posted by u/jaden
rikafurude21 · 4 months ago
I don't think many people would be excited at the thought of going from handcrafted artisan knitting to babying machines in the knitting factory. You need a certain type of autism to be into the latter.
stoneyhrm1 · 4 months ago
I'd think it would be more autistic to continue to use, and have interest in, something that's been superseded by something far easier and more efficient.

Who would you think is weirder: the person still obsessed with horses and buggies, or the person obsessed with cars?

stoneyhrm1 commented on I'm Archiving Picocrypt   github.com/Picocrypt/Pico... · Posted by u/jaden
stoneyhrm1 · 4 months ago
I understand the author's sentiment, but industries don't exist solely because somebody wants them to. I mean, sure, hobbies can exist, but you won't be paid well (or even at all) to work on them.

Software engineering pays because companies want people to develop software. It pays so well because it's hard, but the coding portion is becoming easier. Vibe coding and AI are here to stay; the author can choose to ignore them and keep preaching to a dying field (specifically, writing code, not CS), or embrace them. We should be happy we no longer need to type out if statements and for loops 20 times and can instead focus on high-level architecture.

stoneyhrm1 commented on A non-anthropomorphized view of LLMs   addxorrol.blogspot.com/20... · Posted by u/zdw
cmenge · 5 months ago
I kinda agree with both of you. It might be a required abstraction, but it's a leaky one.

Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".

The difference, I guess, is that it was only to a technical crowd, and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.

With AI being so mainstream and the math being much more elusive than a simple if..then, I guess it's just too easy to take this simple speaking convention at face value.

EDIT: some clarifications / wording

stoneyhrm1 · 5 months ago
I mean, you can boil anything down to its building blocks and make it seem like it didn't 'decide' anything. When you as a human decide something, your brain and its neurons just made some connections, with an output signal sent to other parts that resulted in your body 'doing' something.

I don't think LLMs are sentient or any bullshit like that, but I do think people are too quick to write them off before really thinking about how a neural net 'knows' things in a way similar to how a human 'knows' things: it is trained and reacts to inputs and outputs. The body is just far more complex.

u/stoneyhrm1 · Karma: 28 · Cake day: April 29, 2025