Volume isn't even your main issue here. YouTube ads are powered by AdWords, which all advertisers already use. It comes with tracking and user analytics built in.
You can't compete with YouTube by replicating this business model.
Even so... direct YouTube ad revenue per view is low. Many successful tubers monetize with sponsors. That is replicable, if a (single) tuber has enough views.
I think there can be markets for smaller, paid video sites... but that's not really a competitor to YouTube. It's more like competition for Substack.
The way YouTube is managed, including all the things it gets criticized for, is why it is successful.
Legible rules have loopholes. Keeping advertisers "on their toes" with mystery rules is a strategy.
It makes sense to keep the platform as inoffensive as possible. Strict nudity rules, and other such "hard" rules. Demonetization gives YouTube a chance to implement soft/illegible rules... many of them simply assumed or imagined. It also makes business sense to suppress politics a little. The chilling effect is intentional... and understandable.
Honestly, I think the more open alternative to YouTube is podcasting. Podcasting has terrible discovery, and video is underdeveloped, but its persistence proves it is a good platform.
Half of "the problem" with YouTube is Google running the platform and pursuing their own interests. These are somewhat restrictive, but they also make sense.
The other half is intense competition for daily attention. That's what a low friction, highly accessible platform does. You can't have everything.
Without all the restrictions and manipulations that YouTube does, the platform would be 100% nudity, scandals, and suchlike.
> The key thing is to develop an intuition for questions it can usefully answer vs questions that are at a level of detail where the lossiness matters
The problem is that in order to develop an intuition for which questions LLMs can answer, the user needs to know at least something about the topic beforehand. I believe this lack of initial understanding is what can lead to taking LLM output as factual. If one side of the exchange knows nothing about the subject, the other side can use jargon and present random or lossy facts that are almost guaranteed to impress.
> The way to solve this particular problem is to make a correct example available to it.
My question is how much effort it would take to make a correct example available to the LLM before it can output quality, useful data. If the effort I put in is more than what I get in return, then I feel it's best to write and reason through it myself.
I think there's a parallel here with the internet as an information source. It delivered on "unlimited knowledge at the tip of everyone's fingertips," but lowering the barrier to access also lowered the bar for quality.
That access "works" only when the user is capable of doing their part too: evaluating sources, integrating knowledge, validating, cross-examining.
Now we are just more used to recognizing that accessibility comes with its own problem.
Some of this is down to general education. Some to domain expertise. Personality plays a big part.
The biggest factor is, I think, intelligence. There's a lot of second- and third-order thinking required to simultaneously entertain a curiosity, consider how the LLM works, and exercise different levels of skepticism depending on the types of errors LLMs are likely to make.
The difference between using LLMs correctly and incorrectly is... subtle.
I think Hacker News manages to be OK since it doesn't rely on advertising, which makes it much more palatable.
I think a lot of the ills of social media are ills of the medium itself... once it reaches "everyone scale," game theory maturity and whatnot.
Anyway the way past it is probably to go past it... and onto the next medium. Back is rarely an available option.
On that note... it's curious that Digg now describes itself as a "community platform," not a social network. Ironic, considering they bought the name "Digg."
Speaks to the "late stage social media" meme.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography, and whatnot. Then we imagine the dynamics and tensions of a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
As a user, it feels like the race has never been as close as it is now. Perhaps dumb to extrapolate, but it makes me lean more skeptical about the hard take-off / winner-take-all mental model that has been pushed.
Would be curious to hear the take of a researcher at one of these firms - do you expect the AI offerings across competitors to become more competitive and clustered over the next few years, or less so?
Part of the fun is that predictions get tested on short enough timescales to "experience" in a satisfying way.
Idk where that puts me in my guess at "hard takeoff." I was reserved/skeptical about hard takeoff all along.
Even if LLMs had improved at a faster rate... I still think bottlenecks are inevitable.
That said... I do expect progress to happen in spurts anyway. It makes sense that companies of similar competence and resources get to a similar place.
The winner-take-all thing is a little forced. "Race to the singularity" is the fun, rhetorical version of the investment case. The implied boring case is Facebook, AdWords, AWS, Apple, MSFT... i.e., the modern tech sector tends to create singular big winners... and therefore our pre-revenue market cap should be $1trn.
>I write code with AI tools. I expect my team to use AI tools too. If you know the codebase and know what you're doing, writing great code has never been easier than with these tools.
This statement describes a transitional state. Contributors became qualified in this way before AI. New contributors using AI from day one will not be qualified in the same way.
>The question is more fundamental. In a world of AI coding assistants, is code from external contributors actually valuable at all?
...
>When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.
If the net value of external contributions is negative, the decision makes itself: end external contributions.
For the purpose of thinking up a new model... unpacking that net is the interesting part. I don't mean sorting high-effort contributions from low-effort ones. I mean making productive use of low-effort one-shots.
AI tools have moved the old bottlenecks and we are trying to find where the new ones are going to settle down.