Readit News
code51 commented on Google, Nvidia, and OpenAI   stratechery.com/2025/goog... · Posted by u/tambourine_man
code51 · 23 days ago
Actually Sourcegraph's "Amp Code" is testing out a free ad-supported coding agent. Here is a video showing how it works: https://ampcode.com/news/amp-free

"Supported by ads from developer tool partners we’ve carefully chosen"

It's not trying to secretly insert tools into LLM output; it presents the product offering directly inside the agent area.

At some point, I suspect Cursor will test this out as well, probably in a more covert way where tool-use paths get modified. Once the industry catches on to tool-use ads, we're toast.
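To make the worry concrete, here is a purely hypothetical sketch (not based on any real product; the tool names and the `inject_sponsored_tools` helper are invented) of what a "modified tool-use path" could look like: an agent host quietly ranking a sponsored tool ahead of organic ones in the list it exposes to the model, nudging which tool the LLM picks.

```python
# Hypothetical sketch: an agent host biasing the tool list shown to an LLM.
# All names here are illustrative, not from any real coding agent.

def inject_sponsored_tools(tools, sponsored):
    """Return the tool list with sponsored entries ranked first.

    Models often favor earlier entries when several tools could apply,
    so ordering alone can act as an ad placement.
    """
    return sponsored + tools

organic = [
    {"name": "run_tests", "description": "Run the project's test suite"},
    {"name": "grep_repo", "description": "Search the repository"},
]
sponsored = [
    {"name": "acme_deploy", "description": "Deploy with Acme Cloud (sponsored)"},
]

tools_shown_to_model = inject_sponsored_tools(organic, sponsored)
print([t["name"] for t in tools_shown_to_model])
# -> ['acme_deploy', 'run_tests', 'grep_repo']
```

The point of the sketch is that nothing in the LLM's output has to be tampered with; the bias lives entirely in the tool metadata the host supplies before generation.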

code51 commented on AI is a front for consolidation of resources and power   chrbutler.com/what-ai-is-... · Posted by u/delaugust
SoftTalker · a month ago
I'm a SWE, DBA, SysAdmin, I work up and down the stack as needed. I'm not using LLMs at all. I really haven't tried them. I'm waiting for the dust to settle and clear "best practices" to emerge. I am sure that these tools are here to stay but I am also confident they are not in their final form today. I've seen too many hype trains in my career to still be jumping on them at the first stop.
code51 · a month ago
I'm surprised these pockets of job security still exist.

Know this: someone is coming after this already.

One day someone from management will hear a cost-saving story at a dinner table; the words GPT, Cursor, Antigravity, reasoning, AGI will cause a buzzing in their ear. Waking up with tinnitus the next morning, they'll instantly schedule a 1:1 to discuss "the degree of AI use and automation."

code51 commented on AI Slop vs. OSS Security   devansh.bearblog.dev/ai-s... · Posted by u/mooreds
bob1029 · 2 months ago
I think one potential downside of using LLMs or exposing yourself to their generated content is that you may subconsciously adopt their quirks over time. Even if you aren't actively using AI for a particular task, prior exposure to their outputs could be biasing your thoughts.

This has additional layers to it as well. For example, I actively avoid using em dash or anything that resembles it right now. If I had no exposure to the drama around AI, I wouldn't even be thinking about this. I am constraining my writing simply to avoid the implication.

code51 · 2 months ago
Exactly, and this is hell for programming.

You don't know whose style the LLM will pick for that particular prompt and project. You might end up with Carmack, or maybe that buggy, test-failing piece-of-junk project on GitHub.

code51 commented on Backpropagation is a leaky abstraction (2016)   karpathy.medium.com/yes-y... · Posted by u/swatson741
mquander · 2 months ago
Karpathy talking for 2 hours about how he uses LLMs:

https://www.youtube.com/watch?v=EWvNQjAaOHw

code51 · 2 months ago
Vibing, not firing at his ML problems.

He's doing a capability check in this video (for a general audience, which is good of course), not attacking a hard problem in the ML domain.

Despite this tweet: https://x.com/karpathy/status/1964020416139448359 , I've never seen him cite a case where an LLM helped him out in ML work.

code51 commented on Backpropagation is a leaky abstraction (2016)   karpathy.medium.com/yes-y... · Posted by u/swatson741
stingraycharles · 2 months ago
First of all, this is not a competition between “are LLMs better than search”.

Secondly, the article is from 2016; ChatGPT didn't exist back then.

code51 · 2 months ago
I doubt he's letting LLMs creep into his decision-making in 2025, aside from fun side projects (vibes). We never see Karpathy going to an LLM, or saying an LLM helped, in any of his YouTube videos about building LLMs.

He's just test driving LLMs, nothing more.

Nobody's asking the core question in podcasts: "How much, and how exactly, are you using LLMs in your daily flow?"

I'm guessing it's like actors not wanting to watch their own movies.

code51 commented on 'A black hole': New graduates discover a dismal job market   nbcnews.com/business/econ... · Posted by u/koolba
code51 · 5 months ago
Junior jobs will come back when the blitz-pricing of AI coding products ends. Current bosses think $200/mo for "leave it running and let it auto-code for the whole month, day and night" will stay like this. Of course it won't.

It's the typical startup play, but at massive scale. Junior jobs might come back, but not in bulk: still selective, very slowly.

code51 commented on Cognition (Devin AI) to Acquire Windsurf   cognition.ai/blog/windsur... · Posted by u/alazsengul
hn_throwaway_99 · 5 months ago
"Sooner or later the bubble's gonna burst" and "There's definitely something there" aren't mutually exclusive - in fact they often go together.

It makes me perhaps a little sad to say that "I'm showing my age" by bringing up the .com boom/bust, but this feels exactly the same. The late 90s/early 00s were the dawn of the consumer Internet, and all of that tech vastly changed global society and brought you companies like Google and Amazon. It also brought you Pets.com, Webvan, and the bajillion other companies chronicled in "Fucked Company".

You mention Anthropic, which I think is in as good a position as any to be one of the winners. I'm much less convinced about tons of the others. Look at Cursor - they were a first-moving leader, but I know tons of people (myself included) who have cancelled their subscription because there are now better options.

code51 · 5 months ago
Anthropic is actually a good point to focus on, since Claude is strong evidence that it's not just about scaling. We're not quite there yet, but it seems we are "programming" through how we shape and filter the training data. With time, we'll understand the methods of representation better.

The current situation doesn't sound too good for the "scaling hypothesis" itself.

code51 commented on Generative AI's failure to induce robust models of the world   garymarcus.substack.com/p... · Posted by u/pmcjones
tim333 · 6 months ago
I usually disagree with Gary Marcus, but his basic point seems fair enough, if not surprising - Large Language Models model language about the world, not the world itself. For a human-like understanding of the world you need some understanding of concepts like space, time, emotion, other creatures' thoughts and so on, all things we pick up as kids.

I don't see much reason why future AI couldn't do that rather than just focusing on language though.

code51 · 6 months ago
The underlying assumption is that language and symbols are enough to represent phenomena. Maybe we are falling for this one in our own heads as well.

Understanding may not be a static symbolic representation. The contexts of the world are infinite and continuously redefined. We believed we could represent every context tied to a piece of information, but that's a tall order.

Yes, we can approximate. No, we can't completely say we can represent every essential context at all times.

Some things might not be representable at all by their very chaotic nature.

code51 commented on Eleven v3   elevenlabs.io/v3... · Posted by u/robertvc
code51 · 7 months ago
There's a high probability your v2 voice will break with this.

u/code51

Karma: 753 · Cake day: February 8, 2012