Readit News
cornel_io commented on Total monthly number of StackOverflow questions over time   data.stackexchange.com/st... · Posted by u/maartin0
llbeansandrice · a month ago
Am I the only one that sees this as a hellscape?

No longer interacting with your peers but an LLM instead? The knowledge centralized via telemetry and spying on every user's every interaction, and only available through an enshittified subscription to a model that's been trained on this stolen data?

cornel_io · a month ago
Asking questions on SO was an exercise in frustration, not "interacting with peers". I've never once had a productive interaction there; everything I've ever asked was either closed for dumb reasons or not answered at all. The library of past answers was more useful, but fell off hard for more recent tech, I assume because everyone else was having the same frustrations I was and just stopped going there to ask anything.

I have plenty of real peers I interact with, I do not need that noise when I just need a quick answer to a technical question. LLMs are fantastic for this use case.

cornel_io commented on Resistance training load does not determine hypertrophy   physoc.onlinelibrary.wile... · Posted by u/Luc
landl0rd · a month ago
Fifty is excessive, but if you're pushing for hypertrophy and are already well-trained, you're better served doing 12-20 reps than fewer, heavier reps.
cornel_io · a month ago
This article claims that's false, that 8-12 at higher weight leads to the same result as 20+ at lower weights.
cornel_io commented on Measuring AI Ability to Complete Long Tasks   metr.org/blog/2025-03-19-... · Posted by u/spicypete
simonw · 2 months ago
I don't buy it.

I think that could work, but it can work in the same way that plenty of big companies have codebases that are a giant ball of mud and yet they somehow manage to stay in business and occasionally ship a new feature.

Meanwhile their rivals with well constructed codebases who can promptly ship features that work are able to run rings around them.

I expect that we'll learn over time that LLM-managed big ball of mud codebases are less valuable than LLM-managed high quality well architected long-term maintained codebases.

cornel_io · 2 months ago
And at the end of the day it's not really a tradeoff we'll need to make, anyways: my experience with e.g. Claude Code is that every model iteration gets much better at avoiding balls of mud, even without tons of manual guidance and pleading.

I get that even now it's very easy to let stuff get out of hand if you aren't paying close attention yourself to the actual code, so people assume that it's some fundamental limitation of all LLMs. But it's not, much like six-fingered hands were just a temporary state, not anything deep or necessary enforced by the diffusion architecture.

cornel_io commented on I failed to recreate the 1996 Space Jam website with Claude   j0nah.com/i-failed-to-rec... · Posted by u/thecr0w
measurablefunc · 2 months ago
This is known as the data processing inequality. Non-invertible functions cannot create more information than what is available in their inputs: https://blog.blackhc.net/2023/08/sdpi_fsvi/. Whatever arithmetic operations are involved in laundering the inputs by stripping original sources & references cannot lead to novelty that wasn't already available in some combination of the inputs.

Neural networks can at best uncover latent correlations that were already available in the inputs. Expecting anything more is basically just wishful thinking.
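For reference, the inequality being invoked can be written compactly; a minimal statement in standard information-theoretic notation (my paraphrase, not taken from the linked post):

```latex
% Data processing inequality: if X -> Y -> Z form a Markov chain
% (Z depends on X only through Y), then post-processing cannot add information:
\[
  X \to Y \to Z \;\Longrightarrow\; I(X;Z) \le I(X;Y)
\]
% In particular, for any deterministic function f, invertible or not,
\[
  I\bigl(X; f(Y)\bigr) \le I(X;Y).
\]
```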

cornel_io · 2 months ago
Theoretical "proofs" of limitations like this are always unhelpful because they're too broad, and apply just as well to humans as they do to LLMs. The result is true but it doesn't actually apply any limitation that matters.
cornel_io commented on You should write an agent   fly.io/blog/everyone-writ... · Posted by u/tabletcorry
worldsayshi · 3 months ago
I feel like one small piece is missing before you can call it an agent: the ability to iterate over multiple steps until it feels like it's "done". What is the canonical way to do that? I suspect that implementing it the wrong way could make it spiral.
cornel_io · 3 months ago
When a tool call completes, the result is sent back to the LLM to decide what to do next; that's where it can decide to go do other stuff before returning a final answer. Sometimes people use structured outputs or tool calls to explicitly have the LLM decide when it's done, or allow it to send intermediate messages for logging to the user. But that simple loop lets the LLM do plenty if it has good tools.
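A minimal sketch of that loop in Python, assuming an OpenAI-style chat-completions client with tool calling; `run_tool` and the message plumbing are illustrative, not anything the thread specifies:

```python
import json

from openai import OpenAI  # assumption: OpenAI SDK's chat-completions tool-calling interface

client = OpenAI()

def run_tool(name: str, args: dict):
    """Hypothetical dispatcher: execute a local tool and return a JSON-serializable result."""
    raise NotImplementedError

def agent_loop(messages: list, tools: list, model: str = "gpt-4o") -> str:
    while True:
        resp = client.chat.completions.create(model=model, messages=messages, tools=tools)
        msg = resp.choices[0].message
        messages.append(msg)
        # No tool calls means the model has decided it's done: return the final answer.
        if not msg.tool_calls:
            return msg.content
        # Otherwise run each requested tool, append the results, and loop again.
        for call in msg.tool_calls:
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
```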
cornel_io commented on Survey: a third of senior developers say over half their code is AI-generated   fastly.com/blog/senior-de... · Posted by u/Brajeshwar
mgh95 · 5 months ago
Why mock at all? Spend the time making integration tests fast. There is little reason a database, queue, etc. can't be set up on a per-test-group basis and made fast. Reliable software is built upon (mostly) reliable foundations.
cornel_io · 5 months ago
There are thousands of projects out there that use mocks for various reasons, some good, some bad, some ugly. But it doesn't matter: most engineers on those projects do not have the option to go another direction, they have to push forward.
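For what it's worth, the per-test-group setup mgh95 describes can be fairly small these days; a sketch assuming pytest, SQLAlchemy, and the testcontainers package (my choice of tooling, not something either comment names):

```python
import pytest
import sqlalchemy
from testcontainers.postgres import PostgresContainer  # assumption: Docker is available locally

@pytest.fixture(scope="module")
def engine():
    # One real Postgres per test module (i.e. per group of tests), torn down afterwards.
    with PostgresContainer("postgres:16") as pg:
        yield sqlalchemy.create_engine(pg.get_connection_url())

def test_insert_and_read_back(engine):
    with engine.begin() as conn:
        conn.execute(sqlalchemy.text("CREATE TABLE items (name text)"))
        conn.execute(sqlalchemy.text("INSERT INTO items VALUES ('widget')"))
        assert conn.execute(sqlalchemy.text("SELECT name FROM items")).fetchone() == ("widget",)
```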
cornel_io commented on Flunking my Anthropic interview again   taylor.town/flunking-anth... · Posted by u/surprisetalk
ip26 · 6 months ago
It's never personal

You never screened candidates who couldn’t act their way out of a wet paper bag?

cornel_io · 6 months ago
Of course. But I've screened far more out because I was in a rush and got 40 resumes in that day and they just didn't pique my interest as much as the next one over.
cornel_io commented on US AI Action Plan   ai.gov/action-plan... · Posted by u/joelburget
gleenn · 7 months ago
Won't it be funny when someone finally gets to AGI and they realize it's about as smart as a normal person and they spent billions getting there? Of course you can speculate that it could improve. But what if something inherent in intelligence imposes a ceiling, and we end up with a superintelligent but mopey robot that just decides "why bother helping humans" and lazes around like the pandas at the zoo?
cornel_io · 7 months ago
There may be a ceiling, sure. It's overwhelmingly unlikely that it's just about where humans ended up, though.
cornel_io commented on François Chollet: The Arc Prize and How We Get to AGI [video]   youtube.com/watch?v=5QcCe... · Posted by u/sandslash
TheAceOfHearts · 7 months ago
Getting a high score on ARC doesn't mean we have AGI, and Chollet has always said as much AFAIK; it's meant to push the AI research space in a positive direction. Being able to solve ARC problems is probably a prerequisite to AGI. It's a directional push into the fog of war, with the claim being that we should explore that area because we expect it's relevant to building AGI.
cornel_io · 7 months ago
I'm all for benchmarks that push the field forward, but ARC problems seem to be difficult for reasons having less to do with intelligence and more to do with having a text system that works reliably with rasterized pixel data presented line by line. Most people would score 0 on it if they were shown the data the way an LLM sees it; these problems only seem easy to us because there are visualizers slapped on top.
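To make that concrete, here's roughly what a toy ARC-style task looks like once it's flattened into text for the model (a made-up 3x3 example, not an actual ARC task; real tasks are JSON grids of integer colour codes 0-9):

```python
import json

# Made-up ARC-style task: each grid is a list of rows, each cell an integer 0-9.
toy_task = {
    "train": [
        {"input": [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
         "output": [[1, 0, 1], [0, 0, 0], [1, 0, 1]]},
    ],
    "test": [{"input": [[0, 2, 0], [2, 2, 2], [0, 2, 0]]}],
}

# The model sees something like this flat character stream, not a picture:
print(json.dumps(toy_task))
```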

u/cornel_io
Karma: 917 · Cake day: January 19, 2021