Readit News
netcan commented on Stay Away from My Trash   tldraw.dev/blog/stay-away... · Posted by u/EvgeniyZh
netcan · 6 days ago
I suppose this is banal/obvious to many, but I found this very interesting given the practical context.

>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.

This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.

>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?

...

>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.

If external contributions have negative net value, the decision makes itself: end external contributions.

For the purpose of thinking up a new model... unpacking that net is the interesting part. I don't mean sorting high-effort contributions from low-effort ones. I mean making productive use of low-effort one-shots.

AI tools have moved the old bottlenecks, and we are trying to find where the new ones will settle.

netcan commented on Bring bathroom doors back to hotels   bringbackdoors.com/... · Posted by u/bariumbitmap
netcan · 3 months ago
If you're comfortable enough to share a hotel room with someone, you should be willing to watch them poop. Lighten up.


netcan commented on YouTube is a mysterious monopoly   anderegg.ca/2025/09/08/yo... · Posted by u/geerlingguy
netcan · 5 months ago
>I also think it would take some doing to get advertisers to jump on a new platform when YouTube has almost all the viewers.

Volume isn't even your main issue here. YouTube ads are powered by AdWords, which all advertisers already use. It comes with tracking and user analytics built in.

You can't compete with YouTube by replicating this business model.

Even so... direct YouTube ad revenue per view is low. Many successful YouTubers monetize with sponsors. That is replicable, if a (single) YouTuber has enough views.

I think there can be markets for smaller, paid video sites... but that's not really a competitor to YouTube. It's more like competition for Substack.

The way YouTube is managed, including all the reasons it draws criticism, is why it is successful.

Legible rules have loopholes. Keeping advertisers "on their toes" with mystery rules is a strategy.

It makes sense to keep the platform as inoffensive as possible. Strict nudity rules, and other such "hard" rules. Demonetization gives YouTube a chance to implement soft/illegible rules... many of them simply assumed or imagined. It also makes business sense to suppress politics a little. The chilling effect is intentional... and understandable.

Honestly, I think the more open alternative to YouTube is podcasting. Podcasting has terrible discovery, and video is underdeveloped but... it also has persistence that proves it is a good platform.

Half of "the problem" with YouTube is Google running the platform and pursuing their own interests. These are somewhat restrictive, but they also make sense.

The other half is intense competition for daily attention. That's what a low friction, highly accessible platform does. You can't have everything.

Without all the restrictions and manipulations that YouTube does, the platform would be 100% nudity, scandals and suchlike.

netcan commented on An LLM is a lossy encyclopedia   simonwillison.net/2025/Au... · Posted by u/tosh
quincepie · 5 months ago
I totally agree with the author. Sadly, I feel like that's not how the majority of LLM users view LLMs. And it's definitely not how AI companies market them.

> The key thing is to develop an intuition for questions it can usefully answer vs questions that are at a level of detail where the lossiness matters

the problem is that in order to develop an intuition for questions that LLMs can answer, the user will at least need to know something about the topic beforehand. I believe that this lack of initial understanding of the user input is what can lead to taking LLM output as factual. If one side of the exchange knows nothing about the subject, the other side can use jargon and even present random facts or lossy facts, which is almost guaranteed to impress the other side.

> The way to solve this particular problem is to make a correct example available to it.

My question is how much effort would it take to make a correct example available for the LLM before it can output quality and useful data? If the effort I put in is more than what I would get in return, then I feel like it's best to write and reason it myself.

netcan · 5 months ago
>the problem is that in order to develop an intuition for questions that LLMs can answer, the user will at least need to know something about the topic beforehand. I believe that this lack of initial understanding of the user input

I think there's a parallel here with the internet as an information source. It delivered on "unlimited knowledge at everyone's fingertips," but lowering the bar to access also lowered the bar on quality.

That access "works" only when the user is capable of doing their part too. Evaluating sources, integrating knowledge. Validating. Cross-examining.

Now we are just more used to recognizing that accessibility comes with its own problem.

Some of this is down to general education. Some to domain expertise. Personality plays a big part.

The biggest factor is, I think, intelligence. There's a lot of 2nd- and 3rd-order thinking required to simultaneously entertain a curiosity, consider how the LLM works, and exercise different levels of skepticism depending on the types of errors LLMs are likely to make.

The difference between using LLMs correctly and incorrectly is... subtle.

netcan commented on Digg.com is back   digg.com/... · Posted by u/thatgerhard
haburka · 6 months ago
I think that social media has been a massive experiment where we asked, what if we let capital interests subvert our desire for community to get us to watch ads? And we have learned that it’s just not a good idea. I think perhaps Digg was one of the better ones but I solemnly wish social media was mostly illegal, especially advertising based, for profit sites.

I think hacker news manages to be ok since it doesn’t rely on advertising which makes it much more palatable.

netcan · 6 months ago
I'm not sure that advertising specifically is the issue.

I think a lot of the ills of social media are ills of the medium itself... once it reaches "everyone scale," game theory maturity and whatnot.

Anyway, the way past it is probably to go past it... and on to the next medium. Going back is rarely an available option.

On that note... it's curious that Digg now describes itself as a "community platform," not a social network. Ironic, considering they bought the name "digg."

Speaks to the "late stage social media" meme.

netcan commented on Do things that don't scale, and then don't scale   derwiki.medium.com/do-thi... · Posted by u/derwiki
derwiki · 6 months ago
Yep, totally — the OG advice was founder-focused. I just couldn’t resist twisting it a bit, because the line itself is too good not to repurpose.
netcan · 6 months ago
Yeah... I think that's pretty clear right from your title.
netcan commented on AI is different   antirez.com/news/155... · Posted by u/grep_it
netcan · 6 months ago
Here's what I want.

A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.

I wonder if LLMs can produce this.

A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."

One that stands out in my memory is "turning billion dollar industries into million dollar industries."

With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.

We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.

This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.

Anyway... this never actually works out. The meta is a terrible predictor of where things will go.

Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.

netcan commented on What's the strongest AI model you can train on a laptop in five minutes?   seangoedecke.com/model-on... · Posted by u/ingve
zarzavat · 6 months ago
Instead of time it should be energy. What is the best model you can train with a given budget in Joules. Then the MBP and the H100 are on a more even footing.
netcan · 6 months ago
They're all good. Being somewhat arbitrary isn't a bad thing.
netcan commented on GPT-5   openai.com/gpt-5/... · Posted by u/rd
highfrequency · 6 months ago
It is frequently suggested that once one of the AI companies reaches an AGI threshold, they will take off ahead of the rest. It's interesting to note that at least so far, the trend has been the opposite: as time goes on and the models get better, the performance of the different companies gets clustered closer together. Right now GPT-5, Claude Opus, Grok 4, and Gemini 2.5 Pro all seem quite good across the board (i.e. they can all basically solve moderately challenging math and coding problems).

As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.

Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?

netcan · 6 months ago
It's certainly an interesting race to watch.

Part of the fun is that predictions get tested on short enough timescales to "experience" in a satisfying way.

Idk where that puts me in my guess at "hard takeoff." I was reserved/skeptical about hard takeoff all along.

Even if LLMs had improved at a faster rate... I still think bottlenecks are inevitable.

That said... I do expect progress to happen in spurts anyway. It makes sense that companies of similar competence and resources get to a similar place.

The winner-take-all thing is a little forced. "Race to singularity" is the fun, rhetorical version of the investment case. The implied boring case is Facebook, AdWords, AWS, Apple, MSFT... i.e. the modern tech sector tends to create singular big winners... and therefore our pre-revenue market cap should be $1trn.

u/netcan

Karma: 15885 · Cake day: May 29, 2008